BigQuery Global Queries: How to Run Cross-Region SQL in 2026
Learn how BigQuery global queries work, how to enable them, and what they mean for multi-region data architectures. Includes setup, pricing, limitations, and dbt™ implications.

Fabio Di Leta · Feb 18, 2026 · 6 min read
TL;DR
BigQuery global queries (currently in Preview) let you run a single SQL statement that references tables stored in different Google Cloud regions. BigQuery handles the cross-region data movement automatically, using temporary tables that live for up to 8 hours in the primary query region. You pay for compute in each region touched, plus data transfer and short-term storage costs.
The Problem: Data Lives Everywhere, Queries Don't
Modern data teams rarely store everything in one place. Regulatory requirements push customer data into specific regions. Acquisitions bring new datasets with legacy location constraints. Operational databases get replicated close to the systems that write to them — not necessarily where your analytics team sits.
The result is a familiar frustration: your European sales data is in europe-west1, your global product catalog is in us-central1, and joining them for a consolidated revenue report requires either a scheduled replication job, manual dataset copying, or accepting stale data from a pre-joined denormalized table.
BigQuery global queries are designed to eliminate that workaround layer.
What Are BigQuery Global Queries?
A BigQuery global query is a standard SQL query that can reference datasets stored in more than one Google Cloud region within the same statement. Instead of requiring all source tables to share a location, BigQuery automatically extracts the relevant data from each remote region, copies it to the primary query location, and executes the final computation there.
The feature is currently in public Preview and must be explicitly enabled per project and region.
How Global Queries Work: The Execution Model
Before routing production workloads through this feature, it's worth understanding exactly what happens when you fire off a global query.
The execution steps in order:
1. Location resolution — BigQuery determines the primary location, either from your explicit SET @@location declaration or automatically based on query type.
2. Remote subquery execution — For each region that holds data referenced by your query, BigQuery runs a subquery to extract only what's needed. This is where pushdowns apply: BigQuery tries to filter columns and rows at the remote region before transferring anything, so you're not moving entire tables across the wire.
3. Cross-region data transfer — Filtered data is copied from remote regions into temporary tables in the primary region.
4. Temporary storage — Copied data is held for up to 8 hours in the primary region. This is where short-term storage costs accrue.
5. Final query execution — BigQuery runs the complete query using all collected data in the primary region.
6. Results returned — Output is returned to the user or written to a destination table.
⚠️ Important: Pushdown optimizations depend on query structure. Queries that reference STRUCT columns disable pushdowns entirely for the affected remote subqueries — which can significantly increase data transfer volume and cost.
A Realistic Example: Retail Analytics Across Regions
Imagine a retail business with the following setup:
us-central1: A product catalog table (products) with ~50,000 SKUs and associated cost prices
europe-west1: A transactions table (emea_sales) with millions of rows of European point-of-sale data
Previously, producing a margin analysis report required either replicating emea_sales to us-central1 or maintaining a denormalized table. With global queries, you can write:
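A sketch of what that query could look like, assuming illustrative project, dataset, and column names (my-project, emea_dataset, catalog_dataset, sale_price, sale_date) inferred from the scenario above:

```sql
-- Run the final computation in europe-west1, where the large
-- emea_sales table lives (see the cost tip below).
SET @@location = 'europe-west1';

SELECT
  p.category,
  SUM(s.sale_price - p.cost_price) AS total_margin
FROM `my-project.emea_dataset.emea_sales` AS s     -- europe-west1
JOIN `my-project.catalog_dataset.products` AS p    -- us-central1
  ON s.sku = p.sku
WHERE s.sale_date >= '2026-01-01'
GROUP BY p.category
ORDER BY total_margin DESC;
```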
In this case, setting the primary location to europe-west1 makes sense because emea_sales is much larger than products. BigQuery transfers the smaller products table (filtered to only sku, category, and cost_price) from us-central1 into a temporary table in europe-west1, then runs the final aggregation there.
💡 Cost tip: Always run the query from the region where the largest dataset lives. This minimizes the volume of data transferred across regions.
Enabling Global Queries
Global queries are disabled by default. There are two independent flags to configure:
| Flag | Purpose |
|---|---|
| enable_global_queries_execution | Allows a project to run global queries in a region |
| enable_global_queries_data_access | Allows a project to expose its data to global queries from other regions |
This split is intentional. It gives you independent control over where queries run and where data can flow from — which matters a lot for data residency compliance.
Project-level setup
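A sketch of the region-scoped statements, using the flag names from the table above and following the region-qualifier convention BigQuery uses for other region-scoped project options (project and region names are illustrative):

```sql
-- In the region where global queries will run:
ALTER PROJECT my-project
SET OPTIONS (`region-europe-west1.enable_global_queries_execution` = true);

-- In each region whose data should be exposed to global queries:
ALTER PROJECT my-project
SET OPTIONS (`region-us-central1.enable_global_queries_data_access` = true);
```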
Changes take a few minutes to propagate. For org-wide rollout, the same flags are available via ALTER ORGANIZATION SET OPTIONS.
Permissions
Running a global query requires the bigquery.jobs.createGlobalQuery IAM permission. The only predefined role that includes it out of the box is BigQuery Admin — which is almost certainly too broad for most use cases.
For a least-privilege setup, create a custom IAM role with just this permission and assign it to the relevant users or service accounts.
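For example, with gcloud (the role ID, project, and service account are illustrative):

```bash
# Create a least-privilege role containing only the global-query permission.
gcloud iam roles create globalQueryRunner \
  --project=my-project \
  --title="BigQuery Global Query Runner" \
  --permissions=bigquery.jobs.createGlobalQuery \
  --stage=GA

# Grant it to the service account that runs cross-region queries.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:analytics@my-project.iam.gserviceaccount.com" \
  --role="projects/my-project/roles/globalQueryRunner"
```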
When BigQuery Chooses the Location For You
For certain query types, BigQuery ignores your SET @@location declaration and determines the primary location automatically:
DML statements (INSERT, UPDATE, DELETE) always run in the target table's region
DDL statements (e.g. CREATE TABLE AS SELECT) run in the region where the resource is being created
Queries with a destination table run in the region of the destination table
This means that if you're inserting the results of a cross-region query into a table in us-central1, you don't need to set the location explicitly — BigQuery will route it there regardless.
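For instance (dataset and table names are illustrative, reusing the retail scenario from earlier):

```sql
-- No SET @@location needed: the destination table pins the primary
-- region to us-central1. Data from emea_sales is staged in temporary
-- tables there before the insert runs.
INSERT INTO `my-project.us_reporting.daily_emea_revenue` (sale_date, revenue)
SELECT sale_date, SUM(sale_price)
FROM `my-project.emea_dataset.emea_sales`   -- lives in europe-west1
GROUP BY sale_date;
```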
Pricing Breakdown
Global queries introduce cost components beyond standard BigQuery query billing:
| Component | Basis |
|---|---|
| Subquery compute (remote regions) | Your pricing model in each remote region |
| Final query compute (primary region) | Your pricing model in the primary region |
| Cross-region data transfer | Bytes transferred, billed at BigQuery data replication rates |
| Temporary storage (up to 8 hours) | Bytes held in temporary tables in the primary region, at short-term storage rates |
⚠️ One billing gotcha worth flagging explicitly: if data replication succeeds but the final query fails, you still pay for the replication. Global queries are not executed atomically. Always test on small data subsets before scaling up.
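One low-risk way to do that, reusing the illustrative retail schema from earlier, is to restrict the remote data with a narrow filter first, assuming the filter is eligible for pushdown:

```sql
-- Smoke-test the cross-region join on one day of data; the date filter
-- is applied in europe-west1, so only a small slice is transferred.
SET @@location = 'europe-west1';

SELECT
  p.category,
  SUM(s.sale_price - p.cost_price) AS total_margin
FROM `my-project.emea_dataset.emea_sales` AS s
JOIN `my-project.catalog_dataset.products` AS p
  ON s.sku = p.sku
WHERE s.sale_date = '2026-01-01'
GROUP BY p.category;
```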
Limitations to Know Before Adopting
Global queries are in Preview, and the constraint list is meaningful:
10 tables per region maximum per query
No query result caching — every execution transfers data fresh
Higher latency than single-region queries
Pseudocolumns unsupported (e.g. _PARTITIONTIME)
Flexible column names unsupported
STRUCT columns disable pushdowns on affected remote subqueries — flatten structs in a remote view first to recover performance (see the sketch after this list)
Authorized views and routines spanning regions are not supported
Materialized views over global queries are not supported
Not supported in sandbox mode or Assured Workloads
CMEK on remote temporary tables requires a globally-scoped KMS key; set a default global KMS key on the query-running project if CMEK coverage is required across all temporary data
RANGE and INTERVAL literals can't be used in WHERE clauses on BigLake table columns
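A sketch of that struct-flattening workaround, assuming a hypothetical events table in the remote region with an attrs STRUCT column:

```sql
-- Create the view in the remote region (europe-west1 here) so global
-- queries read scalar columns and pushdowns stay enabled.
CREATE OR REPLACE VIEW `my-project.emea_dataset.events_flat` AS
SELECT
  event_id,
  event_ts,
  attrs.country AS country,   -- STRUCT fields promoted to
  attrs.channel AS channel    -- top-level scalar columns
FROM `my-project.emea_dataset.events`;
```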
What This Means for dbt™ Users
If your dbt™ project targets BigQuery, global queries could eventually simplify multi-region source architectures. Today, cross-region dbt™ setups typically rely on one of these patterns:
Pre-replication: a separate pipeline copies remote datasets into the primary region before dbt™ runs
Region-partitioned projects: separate dbt™ projects per region with separate orchestration
Denormalized sources: a snapshot job pre-joins cross-region data into a single table that dbt™ reads from
With global queries enabled at the BigQuery level, dbt™ models could reference sources across regions without an explicit replication step upstream. The SQL itself doesn't need to change — BigQuery handles the cross-region resolution transparently.
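In practice, such a model could look no different from a single-region one. A sketch, assuming hypothetical dbt™ sources named emea and catalog that point at datasets in the two regions:

```sql
-- models/margin_by_category.sql: plain dbt SQL. Once global queries
-- are enabled, BigQuery resolves the cross-region join on its own.
select
    p.category,
    sum(s.sale_price - p.cost_price) as total_margin
from {{ source('emea', 'emea_sales') }} as s       -- europe-west1
join {{ source('catalog', 'products') }} as p      -- us-central1
    on s.sku = p.sku
group by 1
```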
That said, the current 10-tables-per-region limit and the lack of materialized view support mean that complex dbt™ dependency graphs with many cross-region references may hit constraints quickly. It's also worth noting that temporary tables created by global queries are not visible to dbt™'s incremental logic, and any incremental models writing to a destination table will have their location automatically determined by that table's region.
Monitor the BigQuery release notes — once this feature reaches GA, it could meaningfully reduce pipeline complexity for teams managing data across geographic boundaries.
FAQ
What is a BigQuery global query?
A BigQuery global query is a SQL statement that references tables stored in more than one Google Cloud region. BigQuery automatically handles cross-region data movement, copying only the necessary rows and columns to a primary query location before executing the final computation.
How much do BigQuery global queries cost?
The cost includes compute charges in each region touched by the query, cross-region data transfer fees based on BigQuery's data replication pricing, and up to 8 hours of temporary storage in the primary region. You are charged for data transfer even if the final query fails after replication succeeds.
How do I enable global queries in BigQuery?
Use ALTER PROJECT SET OPTIONS to set enable_global_queries_execution = true in the region where queries will run, and enable_global_queries_data_access = true in each region whose data you want to expose. Changes take a few minutes to propagate.
Do global queries work with dbt™?
Global queries are transparently handled at the BigQuery layer, so dbt™ models don't require syntax changes. However, current limitations around the 10-tables-per-region cap and lack of materialized view support mean complex dbt™ graphs should be tested carefully before fully adopting the feature.
Are BigQuery global queries production-ready?
As of early 2026, global queries are in public Preview. They are available for testing and early adoption but come with Preview-stage caveats including limited support, no SLA guarantees, and an evolving limitations list.
What happens if a global query fails mid-execution?
Global queries are not atomic. If data replication from a remote region succeeds but the final query fails, you are still billed for the replication. Always test on small datasets before running large cross-region queries.