Dagster is the cloud-native orchestrator for the whole development lifecycle
Define assets in code
Write regular Python functions, and Dagster infers the DAG without any tedious wiring.
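The idea can be sketched in plain Python (a toy illustration of the mechanism, not Dagster's actual implementation — in real Dagster you would use the `@asset` decorator from the `dagster` package): a decorator registers each function, and its parameter names become its upstream dependencies, so the graph is inferred rather than declared by hand.

```python
import inspect

ASSETS = {}

def asset(fn):
    # Register the function as an asset; its parameter names are
    # treated as the names of its upstream assets.
    ASSETS[fn.__name__] = fn
    return fn

@asset
def raw_orders():
    return [5, 12, 7]

@asset
def total_revenue(raw_orders):
    return sum(raw_orders)

def materialize(name):
    # Resolve dependencies recursively from parameter names --
    # the DAG is inferred, never written down explicitly.
    fn = ASSETS[name]
    deps = inspect.signature(fn).parameters
    return fn(**{d: materialize(d) for d in deps})

print(materialize("total_revenue"))  # -> 24
```

Here `total_revenue` depends on `raw_orders` purely because of its function signature; no edge list or DAG file is ever written.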
Develop pipelines locally
Don't fight complex local dependencies or wait for a long build pipeline to catch a typo. Run all of Dagster locally with a single `pip install`.
Integrate the entire data stack
Use your existing code for dbt, Airbyte, Fivetran, Snowflake, BigQuery, and others.
Unit-test your data applications, separate business logic from environments, and set explicit expectations on uncontrollable inputs.
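For example (a hypothetical transformation, sketched in plain Python): keeping the business logic a pure function means it can be unit-tested with no database, no orchestrator, and no environment setup.

```python
def clean_orders(rows):
    # Pure business logic: drop rows with a missing amount.
    # No database client or environment -- trivial to unit-test.
    return [r for r in rows if r.get("amount") is not None]

def test_clean_orders():
    rows = [{"id": 1, "amount": 10}, {"id": 2, "amount": None}]
    assert clean_orders(rows) == [{"id": 1, "amount": 10}]

test_clean_orders()
```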
Deploy to production with confidence by first testing new features in automatically managed staging environments.
Sensor and schedule testing
Manually trigger a test evaluation of a sensor or schedule and view the results from the UI.
Enterprise plans are configurable to include any number of Deployments based on your organization’s needs.
Go from pull request to production effortlessly with continuous code deployments.
Parameterize your data pipelines without modifying code or insecurely hard-coding database credentials.
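One common pattern (a generic sketch, not Dagster-specific config): read parameters and secrets from the environment at launch time, so runs can be re-parameterized per deployment without touching code, and credentials never land in the repository.

```python
import os

def load_config():
    # Pipeline parameters come from the environment, so each
    # deployment injects its own values; nothing is hard-coded.
    return {
        "table": os.environ.get("TARGET_TABLE", "orders"),
        "db_url": os.environ.get("DATABASE_URL", ""),  # injected by the deployment
    }

cfg = load_config()
```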
Observe individual runs and query logs for detailed diagnostics, discover the most time-consuming tasks via a Gantt chart, re-execute subsets of a run, and more.
Track asset lineage
Get details on each asset: freshness, status, schema, metadata, and dependencies displayed in one consolidated view.
Trigger Slack or email notifications on run failure/success, schedule/sensor tick failure, and more.
“We can deliver more insight, and we can deliver faster value. Because speed is what we care about.”
Head of Data Engineering at Group 1001
Collaborate with your team
We built role-based access control and component-level isolation into Dagster because security is never a one-size-fits-all solution.
Single pane of glass
Observe, optimize, and debug data pipelines across teams from a modern, intuitive UI.
Dagster Cloud supports five user roles that provide graduated levels of role-based access control: Viewer, Launcher, Editor, Admin, and Organization Admin.
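Conceptually these roles form an ordered hierarchy, which a permission check can use directly (an illustration of the concept only, not Dagster Cloud's implementation):

```python
# Ordered from least to most privileged.
ROLES = ["Viewer", "Launcher", "Editor", "Admin", "Organization Admin"]

def has_at_least(user_role, required_role):
    # A user may perform an action if their role sits at or above
    # the level the action requires.
    return ROLES.index(user_role) >= ROLES.index(required_role)

print(has_at_least("Editor", "Launcher"))  # -> True
print(has_at_least("Viewer", "Editor"))    # -> False
```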
Isolated team environments
Working across teams does not mean every team has to use the same Python environment. Dagster supports separate projects for isolation, while still enabling cross-project lineage.
Security and compliance
SSO & authentication
Support for Google, GitHub, and SAML IdPs (Okta, Azure Active Directory).
Track all activity and changes made to the system with a unified view of all user actions.
SOC2 & HIPAA compliance
The development process at Elementl is SOC2 and HIPAA compliant.
Automate your pipelines
Dagster offers a few different ways of automating data pipelines, and choosing the right Dagster tool depends on your specific needs.
Execute runs at a fixed interval with fault-tolerant, cron-based schedules.
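To make the cron idea concrete, here is a minimal matcher for a small subset of cron syntax (`*`, `*/n`, and comma lists) — a toy sketch, not Dagster's scheduler:

```python
from datetime import datetime

def field_matches(field, value):
    # Supports "*", "*/n", and comma-separated literals -- a small
    # subset of real cron field syntax.
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value in {int(v) for v in field.split(",")}

def cron_matches(expr, dt):
    minute, hour, dom, month, dow = expr.split()
    return all(field_matches(f, v) for f, v in [
        (minute, dt.minute),
        (hour, dt.hour),
        (dom, dt.day),
        (month, dt.month),
        (dow, (dt.weekday() + 1) % 7),  # cron uses 0 = Sunday
    ])

# "every 15 minutes" fires at 09:30:
print(cron_matches("*/15 * * * *", datetime(2024, 1, 1, 9, 30)))  # -> True
```

A fault-tolerant scheduler additionally tracks which ticks have fired, so a missed tick can be detected and recovered after downtime.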
Instigate runs based on an external state change with sensors.
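The sensor pattern can be sketched as a cursored poll (plain Python, not Dagster's sensor API): each evaluation compares external state against a cursor from the previous tick and emits one run request per new item.

```python
def evaluate_sensor(current_files, cursor):
    # Compare external state (e.g., files in a bucket) against the
    # cursor from the previous tick; request one run per new file.
    seen = set(cursor)
    run_requests = [{"run_key": f} for f in sorted(current_files) if f not in seen]
    new_cursor = sorted(seen | set(current_files))
    return run_requests, new_cursor

# First tick sees two files; second tick only requests a run for the new one.
requests, cursor = evaluate_sensor({"a.csv", "b.csv"}, [])
requests2, cursor2 = evaluate_sensor({"a.csv", "b.csv", "c.csv"}, cursor)
print([r["run_key"] for r in requests2])  # -> ['c.csv']
```

The `run_key` makes requests idempotent: re-evaluating the sensor with the same state launches nothing new.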
Automatically launch runs to materialize assets based on their freshness policy.
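The underlying decision can be sketched as a staleness check (an illustration of the concept, not Dagster's freshness-policy engine): if an asset was last materialized longer ago than its allowed lag, a run should be launched.

```python
from datetime import datetime, timedelta

def is_stale(last_materialized, max_lag_minutes, now):
    # An asset violates its freshness policy when more than
    # max_lag_minutes have passed since its last materialization.
    return now - last_materialized > timedelta(minutes=max_lag_minutes)

now = datetime(2024, 1, 1, 12, 0)
print(is_stale(datetime(2024, 1, 1, 10, 0), 60, now))   # -> True: 2h old, 60m allowed
print(is_stale(datetime(2024, 1, 1, 11, 30), 60, now))  # -> False: only 30m old
```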
“Dagster empowers my stakeholder teams to own their data assets end-to-end like no other orchestrator can.”
Staff Data Engineer at Dutchie
Two deployment options
Serverless
Let us host the entire orchestration engine. Spinning up in Serverless is completely effortless: just write a Python file and we do the rest. No Dockerfiles, no Kubernetes, no provisioning.
Free for 30 days
Bring your own compute
Bring your compute platform and let Dagster host the control plane, which never sees your code or data. Maximize flexibility and security while offloading the vast majority of the operational burden. Egress-only requirements mean no network headaches. Dagster handles patching, upgrades, and the service SLA.
Free for 30 days
Enterprises build on Dagster
Organizations across industries use Dagster Enterprise to run their data platforms.