
dbt Build vs Run: 6 Key Differences & Critical Best Practices

Understanding the dbt build and dbt run Commands 

The dbt build and dbt run commands serve different purposes in a dbt project, although dbt build encompasses the functionality of dbt run and more.

dbt run compiles and executes the SQL for all selected models, materializing them according to their configurations (e.g., creating or updating tables or views in your data warehouse). Because it focuses solely on materialization, it is useful during development when you are iterating on transformation logic and don't need the overhead of running tests, snapshots, or seeds with every change.

dbt build is a more comprehensive command that performs a sequence of actions: 

  • Runs models: Materializes your dbt models, similar to dbt run. 
  • Runs tests: Executes the data tests defined in your project to ensure data quality. If a test on an upstream resource fails, downstream resources that depend on it will be skipped. 
  • Runs snapshots: Updates your dbt snapshots, which capture changes to source data over time. 
  • Runs seeds: Loads seed data (CSV files) into your data warehouse. 

The build command is designed for production environments or when you need a holistic validation of your dbt project. It ensures data quality by incorporating testing as part of the build process, preventing downstream issues that could arise from faulty upstream data. 

Summarizing the key differences:

  • Scope: dbt run is limited to model materialization, while dbt build includes models, tests, snapshots, and seeds. 
  • Customizability: dbt run applies selections only to models, while dbt build applies them across models, tests, seeds, and snapshots.
  • Execution: dbt run executes only model SQL, while dbt build runs a full pipeline including models, tests, seeds, and snapshots.
  • Data quality assurance: dbt build prioritizes data quality by running tests before materializing dependent models, while dbt run does not inherently include testing. 
  • Supported flags and filtering: dbt run flags are straightforward since they affect only models, while dbt build flags require more care because they impact multiple resource types and dependencies.
  • Use cases: dbt run is often preferred for rapid iteration during development, while dbt build is recommended for production deployments and comprehensive project validation.

dbt Build vs. dbt Run: Key Differences in Depth 

1. Usage

The dbt run command has a simple syntax:

dbt run [flags]

By default, it runs all models in the project. You can use flags such as --select or --exclude to target specific models. For example:

dbt run --select "tag:daily"

This runs only the models with the daily tag. The command is often executed locally during development to test model logic without additional steps like testing or snapshotting.
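
The --exclude flag works the same way in reverse. For example, assuming some models carry a (hypothetical) slow tag:

dbt run --exclude "tag:slow"

This runs every model in the project except those tagged slow.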

The dbt build command has a broader scope but uses a similar syntax:

dbt build [flags]

It runs models, tests, snapshots, and seeds. For instance:

dbt build --select "tag:daily"

This builds all models, snapshots, and seeds with the daily tag, as well as their associated tests. In production pipelines, dbt build is often preferred because it ensures both data transformation and validation are performed using a single command.

2. Scope

The dbt run command has a narrow scope: it only builds models by compiling and running the SQL in your project. It does not handle any testing, snapshotting, or data validation tasks. This makes it suitable for workflows focused solely on transformation logic, such as local development or debugging individual models.

dbt build offers a broader scope. It executes models like dbt run but also runs schema and data tests, updates snapshots, and refreshes seed tables. This makes dbt build an all-in-one command that covers both transformation and quality assurance. It's useful in production deployments where full validation of the data pipeline is required.

3. Supported Flags and Filtering

Both commands support the same selection mechanisms: the --select, --exclude, and --selector flags define which resources to execute, while --defer lets unselected upstream dependencies resolve to artifacts from another environment. In dbt run, these flags affect only models, which simplifies their usage and interpretation.

With dbt build, the same flags control a broader set of actions, as the command includes tests, snapshots, and seeds. For example, selecting a model with --select can lead to tests and snapshots being executed as part of the dependency graph. This requires more careful planning when using filtering options to avoid running unintended tasks.

In addition, dbt build is sensitive to model dependencies and test failures, so filters can have cascading effects. A selection that includes a parent model may also trigger builds and tests on child models or associated tests. Alternatively, failing tests on upstream models may cause downstream models to be skipped. Understanding how the DAG (directed acyclic graph) is structured is critical when using dbt build to fine-tune execution behavior.
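
For example, graph operators and selector methods can be combined to shape exactly what a build touches. The model and tag names below are hypothetical:

# Build a model plus everything downstream of it, including associated tests
dbt build --select "orders+"

# Build everything tagged daily, but keep snapshots out of the run
dbt build --select "tag:daily" --exclude "resource_type:snapshot"

Because dbt build evaluates tests as it walks the DAG, a failing test on orders would cause the downstream portion of the selection to be skipped.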

4. Execution

When you execute dbt run, the command compiles the SQL files for selected models and runs them against your target database. Each model is materialized based on its configuration: view, table, incremental, or ephemeral. There is no execution of tests, snapshots, or other validation routines. This leads to faster execution times and is particularly useful for iterative model development or targeted transformations.

dbt build runs a more complex dependency graph. It begins by building the selected models, seeds, and snapshots in DAG order, then automatically runs associated unit, schema and data tests. This full-stack execution process is slower but ensures that the entire pipeline meets defined data quality standards. It is optimized for completeness rather than speed.

5. Data Quality Assurance

dbt run performs no validation: no tests are executed. This means any model outputs are unchecked, which can be risky if used in automated or production environments without further validation.

dbt build integrates quality assurance into the build process. After models are run, dbt automatically triggers any unit tests, schema tests (like uniqueness or not null checks), and data tests (custom SQL-based assertions). This built-in validation helps catch data quality issues before downstream users or systems are affected, making dbt build far more robust for production workflows.
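
When diagnosing failures, dbt can also persist the failing rows of each test to the warehouse. A minimal sketch (the daily tag is hypothetical):

# Materialize failing test rows as tables for later inspection
dbt build --select "tag:daily" --store-failures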

6. Use Cases

dbt run is best suited for:

  • Development and debugging tasks. When a developer is iterating on SQL logic, changing model configurations, or validating incremental build strategies, dbt run provides fast feedback by skipping testing and validation. 
  • Ad hoc tasks, such as refreshing a single model or running a small set of transformations without invoking the full pipeline.
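
Typical development invocations along these lines might look like the following (the model name is hypothetical):

# Rebuild a single model while iterating on its SQL
dbt run --select stg_orders

# Rebuild the model together with everything upstream of it
dbt run --select "+stg_orders"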

dbt build is intended for production workflows where both data transformation and validation must occur together. For example:

  • Ensuring models are built, tested, and validated before being promoted for downstream use. 
  • Integrating dbt into CI/CD pipelines, scheduled production jobs, and environments where data quality checks are required to prevent invalid data from reaching analytics systems. 
  • Running end-to-end data pipelines that combine seeds, transformations, and snapshots.

Best Practices for Using dbt Build and dbt Run 

Follow these best practices to make effective use of the dbt build and dbt run commands in your projects. 

1. Command Choice & Scoping

Efficient use of dbt run and dbt build starts with selecting the right command and narrowing execution to the parts of the project that matter:

  • Pick the command by intent: Use dbt run for fast, local iteration on a few models. Use dbt build in CI/CD and scheduled jobs to run models + tests (+ snapshots) together.
  • Keep runs small with selection: Always scope work: --select model_a+, --exclude tag:slow, --selector <yaml_selector>. Avoid project-wide builds unless required.
  • Prefer state-aware execution: Use --state and selectors like state:modified+ to run only changed models and their dependents. This shortens CI and reduces warehouse cost.
  • Use tags to shape pipelines: Tag models (e.g., tag:core, tag:slow, tag:hourly) and target them in commands to create predictable, layered runs.
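
A sketch of a scoped, state-aware CI invocation, assuming production artifacts have been downloaded to a local prod-artifacts directory:

# Run only models changed relative to production, plus their dependents
dbt build --select "state:modified+" --state prod-artifacts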

2. Environments & Safety

Separating environments and guarding against accidental misuse prevents issues that can disrupt production systems:

  • Separate dev from prod: Point dev runs to isolated schemas/warehouses via target profiles. Never develop against production datasets.
  • Use environment variables for secrets: Keep credentials and toggles out of project code. Drive behavior (e.g., RUN_SLOW_TESTS=false) via project and environment variables.
  • Add runtime guards: Implement row-count reasonableness checks and sentinel tests on critical tables to catch silent upstream issues.
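
For example, targets defined in profiles.yml keep development isolated, and environment variables can drive toggles (RUN_SLOW_TESTS is a hypothetical variable your project would read via the env_var() function):

# Develop against an isolated dev schema defined in profiles.yml
dbt run --target dev

# Drive project behavior with an environment toggle
RUN_SLOW_TESTS=false dbt build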

3. Testing & Quality Controls

Testing should focus on signals that matter, so that builds fail only for meaningful reasons and low-quality data is prevented from moving forward:

  • Make tests actionable: Write minimal, high-signal tests (e.g., unique, not_null, key constraints, invariants). Avoid noisy tests that fail often without impact.
  • Fail fast in CI: Add --fail-fast to surface the first error quickly. Combine with --warn-error to treat warnings as failures when quality is critical.
  • Control freshness checks: Make sure to run dbt source freshness before dbt run or dbt build in production, since dbt build doesn’t check source freshness automatically.
  • Enforce contracts: Enable contracts on critical models and sources to catch column shape or type drift during dbt build.
  • Document and surface ownership: Populate meta/owner for models. Route test failures to on-call teams with clear ownership metadata.
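
Put together, a production invocation reflecting these practices might look like this (note that --warn-error is a global flag and precedes the subcommand):

# Check source freshness first, since dbt build does not do this automatically
dbt source freshness

# Stop on the first failure and treat warnings as errors
dbt --warn-error build --fail-fast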

4. Incremental Models, Snapshots & DAG Hygiene

Incremental models and snapshots require careful setup, and structuring runs in stages helps maintain a clean, reliable DAG:

  • Tame incremental models: Validate unique_key, on_schema_change, and filters in is_incremental() blocks. Use --full-refresh sparingly and never by default in production.
  • Snapshot deliberately: Snapshots are often expensive and designed to run at controlled intervals (e.g., daily). Using dbt build in local or CI environments may unnecessarily update snapshots, polluting them with redundant versions.
  • Use defer in PR CI: In lightweight CI, run dbt build --select state:modified+ --defer --state <prod_artifacts> to reference prod results instead of rebuilding the entire project.
  • Order work with buckets: Run in stages: seeds → sources/freshness → snapshots → core models → marts → tests on marts. This makes downstream failures easier to diagnose.
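
One way to express those stages as discrete invocations (the tags are hypothetical):

# 1. Load seeds and verify sources
dbt seed
dbt source freshness

# 2. Capture snapshots at a controlled point in the schedule
dbt snapshot

# 3. Build and test core models, then marts
dbt build --select "tag:core"
dbt build --select "tag:marts"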

5. Performance, Cost & Reliability

Managing execution resources, monitoring artifacts, and handling unstable models reduces costs and makes pipelines more stable:

  • Tune concurrency: Set --threads to match warehouse slots/queues. Over-threading can lead to slow runs or hit resource limits.
  • Log and persist artifacts: Store manifest.json, run_results.json, and test results from every run. Use them for selectors, trend analysis, and flaky-test detection.
  • Handle flaky dependencies: Quarantine unstable models with a tag and exclude from scheduled dbt build until stabilized. This prevents pipeline-wide failures.
  • Review cost hotspots: Track model runtime and bytes scanned. Refactor or materialize high-cost models differently (e.g., table vs. view) based on usage.
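
For example, concurrency can be capped per invocation, and artifacts copied out of the target/ directory for later analysis (the artifacts/ destination is hypothetical):

# Match concurrency to available warehouse slots
dbt build --threads 8

# Preserve run artifacts for selectors, trend analysis, and flaky-test detection
cp target/manifest.json target/run_results.json artifacts/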

Orchestrating dbt Data Pipelines with Dagster

Dagster is an open-source data orchestration platform with first-class support for dbt. As a general-purpose orchestrator, Dagster allows you to go beyond just SQL transformations and seamlessly connects your dbt project with your wider data platform.

It offers teams a unified control plane for not only dbt assets, but also ingestion, transformation, and AI workflows. With a Python-native approach, it unifies SQL, Python, and more into a single testable and observable platform.

Best of all, you don’t have to choose between Dagster and dbt Cloud™ — start by integrating Dagster with existing dbt projects to unlock better scheduling, lineage, and observability. Learn more by heading to the docs on Dagster’s integration with dbt and dbt Cloud.
