dbt™ and BigQuery on Paradime: Service Account Setup in Minutes
Feb 26, 2026
dbt™ BigQuery Setup: The Complete Guide to a Secure, Production-Ready Connection with Paradime
Setting up dbt™ with BigQuery shouldn't feel like threading a needle in the dark. Between wrangling gcloud auth locally, configuring profiles.yml across machines, and praying your service account has the right IAM roles — teams lose hours before writing a single model.
This guide walks you through a safe, repeatable dbt™ BigQuery setup using Paradime. You'll learn how to apply the principle of least privilege from day one, cleanly separate development from production, and validate everything before your first dbt run.
Whether you're an analytics engineer connecting BigQuery for the first time, or a platform team standardizing your org's dbt™ setup, this guide has you covered.
Why Use Paradime for BigQuery dbt™ Setup?
Most dbt™ BigQuery setup guides start with local CLI installation, gcloud auth, and hand-editing profiles.yml. That approach works for solo developers, but it introduces friction and risk as teams grow.
Paradime eliminates these pain points by providing a managed, browser-based dbt™ development environment with native BigQuery integration.
Skip Local gcloud Auth + profiles.yml
With dbt Core™ installed locally, every developer must:
- Install the `gcloud` CLI
- Run `gcloud auth application-default login`
- Manually configure `~/.dbt/profiles.yml` with the correct project, dataset, and credentials
Here's what a typical local profiles.yml looks like for BigQuery:
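A minimal sketch of such a profile, using illustrative names (`my_dbt_project`, `my-gcp-project`, `dbt_john`, and the keyfile path are placeholders):

```yaml
# ~/.dbt/profiles.yml -- local BigQuery profile with a service account keyfile
my_dbt_project:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: service-account
      project: my-gcp-project
      dataset: dbt_john
      location: US
      threads: 4
      keyfile: /Users/john/.dbt/keys/dbt-runner-key.json
```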
Every developer needs access to that keyfile, must keep the path consistent, and has to update the file when anything changes. This does not scale.
With Paradime, the connection is configured once at the workspace level through the UI. Developers log in and start working — no local setup, no credential files floating around on laptops.
Separate Dev and Prod Targets Cleanly
One of the most critical (and commonly botched) aspects of any dbt™ BigQuery setup is the separation between development and production environments.
Paradime enforces this separation architecturally:
- Code IDE connections target your dev dataset (e.g., `dbt_john`, `dbt_jane`)
- Bolt Scheduler connections target your prod dataset (e.g., `analytics`, `dbt_prod`)
Figure 1: Paradime enforces environment separation between Code IDE (dev) and Bolt Scheduler (prod) connections.
Because these are separate connection configurations with separate credentials, there's zero risk of a developer accidentally writing to production during ad-hoc development.
Prerequisites (BigQuery + IAM)
Before touching Paradime, you need a properly configured Google Cloud project with the right IAM guardrails. This section follows the principle of least privilege — granting only the permissions dbt™ actually needs.
Create/Select a GCP Project and Datasets
Open the BigQuery Console and select or create a GCP project.
Create your BigQuery datasets. In BigQuery, datasets are the equivalent of schemas. At a minimum you'll need a development dataset per engineer (e.g., `dbt_john`) and a production dataset (e.g., `analytics`).
⚠️ Important: All datasets that dbt™ will reference in a single run must be in the same location (e.g., the `US` multi-region). Mixing `US` and `EU` datasets in one query causes a `notFound` / location mismatch error.
Create a Service Account
Create a dedicated service account for dbt™ — don't reuse a personal Google account or a shared admin account.
🔒 Security tip: Treat this JSON key like a password. Never commit it to Git. Upload it directly to Paradime's encrypted connection settings.
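One way to create the service account and its JSON key with `gcloud` (the account name `dbt-runner` and project `my-gcp-project` are illustrative):

```shell
# Dedicated service account for dbt
gcloud iam service-accounts create dbt-runner \
  --project=my-gcp-project \
  --display-name="dbt runner"

# Generate a JSON key to upload to Paradime's encrypted connection settings
gcloud iam service-accounts keys create dbt-runner-key.json \
  --iam-account=dbt-runner@my-gcp-project.iam.gserviceaccount.com
```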
Minimum IAM Roles to Run dbt™ (Dataset-Level Permissions)
dbt™ needs to run jobs and read/write data. The minimum IAM roles are:
| IAM Role | Purpose | Scope |
|---|---|---|
| BigQuery Job User (`roles/bigquery.jobUser`) | Run queries and jobs | Project-level |
| BigQuery Data Editor (`roles/bigquery.dataEditor`) | Create/update/delete tables in datasets | Dataset-level or project-level |
For dataset-level grants (recommended for least privilege), use the BigQuery Console or a SQL DCL statement:
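A sketch of the DCL form, with illustrative project, dataset, and service account names:

```sql
-- Grant Data Editor on a single dataset (BigQuery models datasets as schemas)
GRANT `roles/bigquery.dataEditor`
ON SCHEMA `my-gcp-project.analytics`
TO "serviceAccount:dbt-runner@my-gcp-project.iam.gserviceaccount.com";
```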
Figure 2: Least-privilege IAM strategy — Job User at the project level and Data Editor scoped to individual datasets.
Why not just grant `BigQuery Admin`? Because that violates least privilege. `BigQuery Admin` can delete datasets, manage access controls, and modify project-level settings — none of which dbt™ needs.
Create the BigQuery Connection in Paradime
With your GCP project and service account ready, it's time to wire everything together in Paradime.
Settings → Connections
1. Click Settings in the top menu bar of the Paradime interface
2. Click "Connections" in the left sidebar
3. Click "Add New" next to Code IDE Environment
4. Select "BigQuery"
Upload/Enter Service Account JSON Securely
Paradime offers two authentication methods:
- Service Account JSON (recommended for teams) — Upload your `.json` key file directly. Paradime stores it encrypted and never exposes it to individual developers.
- BigQuery OAuth — Each developer authorizes via their Google account. Better for personal dev sandboxes, but requires each user to manage their own scopes.
For most teams, Service Account JSON is the right choice because:
A single credential is managed centrally
Key rotation happens in one place
Developers never see or handle credentials
Set Project, Default Dataset, and Location
Fill in the workspace-level fields:
| Field | Description | Example |
|---|---|---|
| Profile Name | Must match the `profile` name in `dbt_project.yml` | `my_dbt_project` |
| Target | Environment name | `dev` |
| Dataset Location | Must match your BigQuery dataset region | `US` |
| Project ID | Your GCP project identifier | `my-gcp-project` |
| Service Account JSON | The uploaded key file | `dbt-runner-key.json` |
And the user-level fields (each developer sets their own):
| Field | Description | Example |
|---|---|---|
| Dataset | Developer's personal dev dataset | `dbt_john` |
| Threads | Parallel execution threads | `4` |
💡 Pro tip: The Dataset Location field in Paradime must match your BigQuery dataset's actual region. If your datasets are in `US`, set the location to `US`. Mismatches cause cryptic `notFound` errors at runtime.
Configure Development (Code IDE)
The Code IDE is where developers write, test, and iterate on dbt™ models. Getting the development configuration right prevents collisions and keeps production safe.
Per-Developer Schema/Dataset Strategy
The gold standard for multi-developer dbt™ projects is per-developer datasets. Each developer gets their own BigQuery dataset (configured as their dataset in the Paradime user-level connection settings):
- John → `dbt_john`
- Jane → `dbt_jane`
- CI → `dbt_pr_123`
When dbt™'s default generate_schema_name macro runs, it concatenates the target schema with any custom schema you define in your models. For a model configured with schema: staging running under John's connection:
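Concretely, if John's user-level dataset is `dbt_john`, the default macro renders:

```
target.schema  = dbt_john
custom schema  = staging
result dataset = dbt_john_staging
```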
This means every developer's materializations land in isolated datasets with zero risk of overwriting each other.
To get clean schema names in production while keeping developer isolation in dev, add this macro to your project:
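A common version of this macro (the target name `prod` is an assumption; adapt it to whatever your production target is called):

```sql
-- macros/generate_schema_name.sql
-- In prod, use the custom schema as-is (e.g., staging, marts).
-- Everywhere else, prefix it with the developer's target schema.
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- set default_schema = target.schema -%}
    {%- if custom_schema_name is none -%}
        {{ default_schema }}
    {%- elif target.name == 'prod' -%}
        {{ custom_schema_name | trim }}
    {%- else -%}
        {{ default_schema }}_{{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```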
Default Target and Naming Conventions
Establish team conventions early:
| Convention | Recommendation |
|---|---|
| Dev target name | `dev` |
| Dev dataset prefix | `dbt_<firstname>` (e.g., `dbt_john`) |
| Threads (dev) | `4` |
| Threads (prod) | `8` |
| CI dataset prefix | `dbt_pr_<number>` (e.g., `dbt_pr_123`) |
In your dbt_project.yml, set the profile name to match what you configured in Paradime:
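For example (the project name `my_dbt_project` is illustrative):

```yaml
# dbt_project.yml (excerpt)
name: my_dbt_project
profile: my_dbt_project  # must match the Profile Name in Paradime's connection settings
```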
Configure Production (Bolt Scheduler)
Production runs are handled by Paradime's Bolt Scheduler — a dedicated execution environment with its own connection, credentials, and dataset targets.
Prod Dataset Strategy
Your production connection should target clean, user-facing dataset names:
| Field | Value |
|---|---|
| Profile Name | `my_dbt_project` (same as dev) |
| Target | `prod` |
| Dataset | `analytics` |
| Dataset Location | `US` |
| Service Account | A separate prod service account JSON |
| Threads | `8` |
To set this up in Paradime:
1. Go to Settings → Connections
2. Click "Add New" next to Bolt Schedules
3. Select "BigQuery"
4. Upload your production service account JSON and fill in the fields
🔒 Best practice: Use a different service account for production than for development. The prod service account should have `Data Editor` on prod datasets only, and the dev service account should have `Data Editor` on dev datasets only. Neither should have access to the other's datasets.
Job-Level Variables and Protection Against Writing to Prod from Dev
Bolt supports environment variables that you can set globally or override per-schedule:
1. Go to Settings → Workspaces → Environment Variables
2. In the Bolt Schedules section, click Add New
3. Add variables like:
| Key | Value |
|---|---|
| `PROD_DATASET` | `analytics` |
| `DBT_TARGET` | `prod` |
These variables can be referenced in your Bolt schedule commands and dbt_project.yml via {{ env_var('PROD_DATASET') }}.
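For instance, a `dbt_project.yml` fragment (project and folder names are illustrative) that routes `marts` models to the dataset named by `PROD_DATASET`, with a fallback when the variable is unset:

```yaml
# dbt_project.yml (excerpt)
models:
  my_dbt_project:
    marts:
      +schema: "{{ env_var('PROD_DATASET', 'analytics') }}"
```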
Figure 3: Environment variables in Bolt ensure prod commands only write to production datasets. Dev connections have no access to prod datasets.
Additional safeguards:
- The `generate_schema_name` macro (shown earlier) strips the developer prefix in prod, so models land in clean dataset names like `staging` and `marts`
- Bolt's "Defer to Production" feature lets dev runs reference production tables for upstream dependencies without materializing into prod
- Schedule-level environment variable overrides let you run the same dbt™ project with different dataset targets per schedule
Paradime Exclusive: Validate with SQL Scratchpad
Before you run dbt run for the first time, you should verify that your connection, permissions, and location settings are all correct. Paradime's SQL Scratchpad makes this trivially easy.
Run a Test Query in the Correct Location
1. In the Code IDE, click the "New File" button (top-right)
2. Paradime creates a file in the `paradime_scratch` folder (e.g., `scratch-1`)
3. Write a simple validation query:
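One option is to query `INFORMATION_SCHEMA.SCHEMATA`, which returns each dataset's location (project, region, and dataset names below are illustrative):

```sql
-- Confirm the connection works and the dataset is in the expected location
SELECT schema_name, location
FROM `my-gcp-project`.`region-us`.INFORMATION_SCHEMA.SCHEMATA
WHERE schema_name = 'dbt_john';
```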
Click Run (or use the Data Explorer preview) to see instant results
If this query returns your dataset with the correct location, your connection is working.
Confirm Dataset Write Permissions
Next, verify that your service account can actually create objects in the target dataset:
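A minimal write test, using illustrative project and dataset names:

```sql
-- Both statements should succeed if Data Editor is granted on the dataset
CREATE TABLE `my-gcp-project.dbt_john.permissions_check` AS SELECT 1 AS ok;
DROP TABLE `my-gcp-project.dbt_john.permissions_check`;
```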
If either statement fails with accessDenied, your service account is missing the BigQuery Data Editor role on that dataset.
💡 Why this matters: Scratchpad files are automatically gitignored, so these validation queries never pollute your dbt™ project repository. They persist across sessions for reference but stay out of version control.
First Run Checklist
You've configured connections, set up IAM, and validated permissions. Time to run dbt™.
dbt deps and Package Resolution
If your project uses packages (from dbt Hub or Git), run dbt deps first to install them.
Your packages.yml might look like:
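A minimal example, with `dbt_utils` as an illustrative dependency (pin whichever version your project actually needs):

```yaml
# packages.yml
packages:
  - package: dbt-labs/dbt_utils
    version: 1.3.0
```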
In Paradime's Code IDE, open the Terminal panel and run dbt deps. You should see output confirming each package was installed successfully.
dbt run + dbt test Expectations in BigQuery
Now, execute your first build:
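From Paradime's terminal, the first build is the standard pair of commands:

```shell
dbt run   # materialize models into your dev dataset
dbt test  # run schema and data tests against the freshly built models
```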
What to expect on a successful first run:
| Command | Expected Output |
|---|---|
| `dbt run` | Models materialize in your dev dataset (e.g., `dbt_john`) |
| `dbt test` | Schema tests and data tests pass (or fail with clear error messages) |
Verify in the BigQuery Console that tables/views were created in the correct dataset:
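You can also check from SQL via the dataset's `INFORMATION_SCHEMA.TABLES` view (illustrative project and dataset names):

```sql
-- List the objects dbt created in your dev dataset
SELECT table_name, table_type
FROM `my-gcp-project.dbt_john`.INFORMATION_SCHEMA.TABLES
ORDER BY table_name;
```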
Figure 4: Sequence of a successful first dbt™ run in Paradime — from package installation through model materialization and testing.
Troubleshooting
Even with a clean setup, you may hit issues. Here are the three most common problems and their fixes.
Access Denied on Dataset
Error: Access Denied: Dataset my-gcp-project:analytics: User does not have permission to query table/dataset.
Cause: The service account is missing the BigQuery Data Editor role on the target dataset, or BigQuery Job User at the project level.
Fix:
1. Verify the service account email in your Paradime connection settings
2. Check the IAM roles currently bound to the service account
3. Grant the missing role
4. For dataset-level grants, use the BigQuery Console: BigQuery → Explorer → Select dataset → Sharing → Permissions → Add Principal
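To check the currently bound roles (step 2), one option is to filter the project's IAM policy by the service account (names below are illustrative):

```shell
gcloud projects get-iam-policy my-gcp-project \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:dbt-runner@my-gcp-project.iam.gserviceaccount.com" \
  --format="table(bindings.role)"
```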
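To grant the missing project-level role (step 3), again with illustrative names:

```shell
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:dbt-runner@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"
```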
Location Mismatch Errors
Error: Not found: Dataset my-gcp-project:dbt_john was not found in location EU (when your dataset is actually in US).
Cause: The Dataset Location field in your Paradime connection doesn't match the actual BigQuery dataset location. Or you're trying to query across regions (e.g., joining a US dataset with an EU dataset).
Fix:
1. Check your dataset's actual location
2. Update the Dataset Location in Paradime's connection settings to match (e.g., change from `EU` to `US`)
3. Ensure all datasets referenced in your dbt™ project (sources, seeds, models) are in the same location
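Step 1 can be done with the `bq` CLI (illustrative project and dataset names); look for the `location` field in the output:

```shell
bq show --format=prettyjson my-gcp-project:dbt_john
```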
⚠️ Common gotcha: BigQuery dataset location is immutable — you can't change it after creation. If there's a mismatch, you'll need to recreate the dataset in the correct region.
Service Account Key Rotation Best Practices
Service account JSON keys should be rotated regularly. Google recommends rotating keys every 90 days or fewer.
Rotation process:
Figure 5: Safe service account key rotation workflow — create, deploy, test, disable, monitor, delete.
Step-by-step:
1. Create a new key for the existing service account
2. Upload the new key in Paradime: Settings → Connections → Edit → Upload new Service Account JSON
3. Validate using the SQL Scratchpad (run a simple query to confirm the new key works)
4. Disable the old key in the GCP Console IAM (don't delete it immediately)
5. Monitor for 24–48 hours to confirm nothing breaks
6. Delete the old key once you're confident the new one is fully operational
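Step 1 above can be done with `gcloud` (account and filenames are illustrative):

```shell
# Mint a new key for the existing service account
gcloud iam service-accounts keys create new-dbt-runner-key.json \
  --iam-account=dbt-runner@my-gcp-project.iam.gserviceaccount.com
```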
Additional best practices from Google Cloud documentation:
- Never commit service account keys to source code repositories
- Don't pass keys between users — upload directly to Paradime
- Consider using BigQuery OAuth for development connections to eliminate dev-side keys entirely
- Use `gcloud asset search-all-resources` to audit keys older than 90 days
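A sketch of that audit query (replace the date with the day 90 days ago, and the scope with your own project):

```shell
gcloud asset search-all-resources \
  --asset-types=iam.googleapis.com/ServiceAccountKey \
  --query="createTime<YYYY-MM-DD" \
  --order-by="createTime" \
  --scope=projects/my-gcp-project
```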
Wrapping Up
A secure, production-ready dbt™ BigQuery setup isn't just about getting a dbt run to succeed — it's about building a foundation that scales safely. Here's what you've accomplished by following this guide:
✅ Least-privilege IAM: Service accounts with only BigQuery Job User + Data Editor, scoped to specific datasets
✅ Dev/prod separation: Isolated connections via Paradime's Code IDE and Bolt Scheduler
✅ Per-developer datasets: No collisions, no accidental production writes
✅ Quick validation: SQL Scratchpad to test permissions and location before your first run
✅ Key rotation plan: A repeatable process for rotating service account credentials
With Paradime handling the connection management, credential storage, and environment separation, your team can focus on what actually matters: building reliable data models.
Ready to get started? Sign up for a free Paradime trial and connect your BigQuery project in under 10 minutes.


