Dagster 1.12: Monster Mash

October 30, 2025
A refined Dagster experience. Faster navigation, GA Components, plug-and-play deployment, improved orchestration with FreshnessPolicies, and a new Support Center for builders at scale.

Highlights

Dagster 1.12 continues our mission to make data orchestration faster, simpler, and more reliable, from a redesigned UI to powerful new Components and deployment shortcuts.

  • A refreshed UI with a cleaner layout and collapsible sidebar
  • Components are now GA, with new integrations and state-backed capabilities
  • Simplified deployment workflows with dg scaffold for Docker + GitHub Actions
  • FreshnessPolicies GA, replacing the legacy freshness API
  • Quality-of-life upgrades for backfills, partitions, and executor behavior
  • A brand-new Support Center and reorganized docs with dozens of new examples

Let’s dig in.

A Refreshed, Streamlined UI

We’ve redesigned Dagster’s UI to make it faster, cleaner, and easier to navigate. The top navigation has been reimagined as a collapsible sidebar, freeing up visual space and bringing your most-used workflows, like runs, assets, and deployments, front and center.

The result is a more cohesive and intuitive experience across Dagster+ Hybrid and Serverless, designed to help you find what you need at a glance and stay focused on building.

Components GA

After extensive testing and community feedback, the Components framework and the dg CLI are now Generally Available and fully supported across the Dagster platform.

This milestone cements Components as the default foundation for new Dagster projects, providing a unified, composable way to define and manage integrations, resources, and orchestration logic across your data stack.

Standardized Integrations

We’ve standardized the interface for integration components, making them easier to customize and extend. Each component now exposes two consistent methods, execute() and get_asset_spec(), which can be subclassed or overridden for advanced customization.

This unified interface means integrations behave predictably across the platform, making it simpler to tailor components to your environment without reinventing the wheel.

Here’s an example of how you might extend a component with custom logic:

from collections.abc import Iterator
from dagster_sling import (
    SlingReplicationCollectionComponent,
    SlingReplicationSpecModel,
    SlingResource,
)
import dagster as dg

class CustomSlingReplicationComponent(SlingReplicationCollectionComponent):
    def execute(
        self,
        context: dg.AssetExecutionContext,
        sling: SlingResource,
        replication_spec_model: SlingReplicationSpecModel,
    ) -> Iterator:
        # Add custom logging before execution
        context.log.info("Starting Sling replication with debug enabled")
        # Run the Sling replication with debug mode enabled
        return sling.replicate(context=context, debug=True)

More Components, More Power

We’ve added a new batch of integration components to simplify setup with common analytics and data tools.

These join the growing collection of out-of-the-box Components built to make your pipelines interoperable by default, reducing setup friction and accelerating integration across your data ecosystem.

State-Backed Components

A big step forward for reliability and maintainability, the new StateBackedComponent base class enables components to persist and manage state separately from YAML / Python configuration. This is especially valuable for integrations that fetch or synchronize external data.

State can now be stored locally, in versioned storage, or via code-server snapshots. Popular integrations, such as Fivetran, can opt in with a single defs_state setting:

# defs.yaml
type: dagster_fivetran.FivetranAccountComponent
attributes:
  ...
  defs_state:
    management_type: LOCAL_FILESYSTEM

Simplified Deployment

You shouldn’t have to wrestle with CI/CD just to get a Dagster project online. Two new dg scaffold commands make deployment practically plug-and-play:

  • dg scaffold build-artifacts: generates Docker and configuration files to build and deploy your project to Dagster Cloud, complete with support for multiple container registries (ECR, Docker Hub, GHCR, ACR, GCR).
  • dg scaffold github-actions: spins up a complete GitHub Actions workflow for CI/CD deployment to Dagster Cloud. It auto-detects Serverless vs Hybrid agents and walks you through setting up required secrets.

These commands make it easy to bootstrap production-ready pipelines and keep them running smoothly.

Core Orchestration Upgrades

This update brings FreshnessPolicies to General Availability and adds new orchestration features designed for reliability and scalability.

Originally introduced in 1.10, the new FreshnessPolicy API is now stable and replaces the old LegacyFreshnessPolicy.

  • FreshnessPolicy is now exported directly from dagster.
  • The FreshnessDaemon runs by default, no dagster.yaml switch needed.
  • Old build_*_freshness_checks methods are marked “superseded,” but remain functional for backward compatibility.

Use FreshnessPolicy for all new use cases to get better visibility and control over asset staleness.
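As an illustration of the underlying semantics (plain Python, not the Dagster API), a time-window freshness policy treats an asset as fresh while its latest materialization falls inside an allowed window; the function name and parameters below are ours, for illustration only:

```python
from datetime import datetime, timedelta

def is_fresh(last_materialized: datetime, now: datetime, fail_window: timedelta) -> bool:
    """Sketch of time-window freshness: an asset is fresh while its
    latest materialization is no older than the allowed window."""
    return now - last_materialized <= fail_window

now = datetime(2025, 10, 30, 12, 0)
print(is_fresh(now - timedelta(hours=2), now, timedelta(hours=24)))   # True: recently materialized
print(is_fresh(now - timedelta(hours=30), now, timedelta(hours=24)))  # False: stale
```

The FreshnessDaemon continuously evaluates this kind of check for you, surfacing staleness in the UI without any custom polling code.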

Configurable Backfills

You can now provide run config when launching a backfill, letting you define consistent settings across all runs, perfect for parameterized replays or targeted batch updates.
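Conceptually (a plain-Python sketch, not the Dagster backfill API), the backfill fans the same user-provided run config out to every partitioned run; the helper and config keys below are hypothetical:

```python
def build_backfill_runs(partition_keys, run_config):
    # Each run targets one partition, but all runs share the same config.
    return [{"partition_key": key, "run_config": run_config} for key in partition_keys]

runs = build_backfill_runs(
    ["2025-10-01", "2025-10-02"],
    {"ops": {"load": {"config": {"full_refresh": True}}}},
)
print(len(runs))  # 2 runs, identical run_config
```

This is what makes backfills with run config useful for parameterized replays: one setting, applied uniformly across the whole partition range.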

Time-Based Partition Exclusions

TimeWindowPartitionsDefinition now supports an exclusions parameter. This lets you skip weekends, holidays, or maintenance windows, specified as cron strings or datetime objects, for fine-grained control over scheduling.

import dagster as dg

daily_partitions = dg.DailyPartitionsDefinition(
    start_date="2022-03-12",
    # Exclude Sundays ("0 0 * * 0") and Saturdays ("0 0 * * 6")
    exclusions=["0 0 * * 0", "0 0 * * 6"],
)
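Those two crons fire at midnight on Sunday (day 0) and Saturday (day 6), so the net effect is a weekdays-only partition set. A plain-Python sketch of the surviving partition starts (our own helper, not part of Dagster's API):

```python
from datetime import date, timedelta

def weekday_partition_starts(start: date, end: date) -> list[date]:
    # Mirrors excluding the "0 0 * * 0" (Sunday) and "0 0 * * 6" (Saturday)
    # crons: keep only daily partitions whose start is Monday-Friday.
    days, d = [], start
    while d < end:
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            days.append(d)
        d += timedelta(days=1)
    return days

# 2022-03-12 is a Saturday; the following week contributes five weekday partitions.
starts = weekday_partition_starts(date(2022, 3, 12), date(2022, 3, 19))
print(len(starts))  # 5
```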

Execution Dependency Options

All Executors now accept a step_dependency_config with require_upstream_step_success.

Set it to False to allow downstream steps to start as soon as their required upstream outputs are ready, even if other outputs from that step are still running.

This is a huge win for multi-asset parallelism and complex dependency graphs.
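A plain-Python sketch of the difference (not the Dagster executor API; the function and set names are ours): with the strict default, a downstream step waits for its upstream step to finish entirely, while the relaxed mode only waits for the specific outputs the step consumes.

```python
def can_start(required_outputs, upstream_steps, produced, finished,
              require_upstream_step_success=True):
    """Decide whether a downstream step may launch.

    produced: names of outputs already emitted by running steps.
    finished: names of upstream steps that have fully completed.
    """
    if require_upstream_step_success:
        # Strict default: every upstream step must have finished successfully.
        return all(step in finished for step in upstream_steps)
    # Relaxed mode: only the outputs this step actually consumes must exist.
    return all(out in produced for out in required_outputs)

# A multi-asset step "extract" has emitted "users" but is still writing "orders".
produced, finished = {"users"}, set()
print(can_start({"users"}, {"extract"}, produced, finished))        # False: must wait
print(can_start({"users"}, {"extract"}, produced, finished,
                require_upstream_step_success=False))               # True: can launch now
```

In a wide multi-asset graph, that earlier launch is exactly where the parallelism win comes from.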

Support & Docs Improvements

We’ve launched a brand-new Support Center with guides and troubleshooting content for both Dagster+ Hybrid and Serverless. You’ll find curated answers from our support team, now fully self-serve.

In the docs, we’ve gone deep on reorganization and new content:

  • New guides: troubleshooting hybrid deployments, diagnosing slow code with py-spy, and resolving sensor timeout issues.
  • Examples overhaul: the Examples section now has easier navigation, including quick links to Dagster University, our internal Dagster platform, and customer Deep Dives, as well as new examples for DSPy and PyTorch and advanced mini-examples covering dynamic fanout, caching, parallelism, and code-sharing.

Acknowledgments

This release was made possible by feedback from our users and contributors; your bug reports, feature requests, and insights shape Dagster every day.

Have feedback or questions? Start a discussion in Slack or GitHub.

Interested in working with us? View our open roles.

Want more content like this? Follow us on LinkedIn.
