Dagster University Presents: Testing with Dagster

March 31, 2025

Learn best practices for writing Pythonic tests for Dagster.

We are happy to announce a new addition to Dagster University: our new course, Testing with Dagster!

Jump to the Testing with Dagster course page ↗

Learn

Testing is often overlooked in data engineering, yet the only way to properly scale a data platform is to move beyond constantly maintaining it and troubleshooting issues in production.

In order to build with confidence, you need assurances that your code works as expected before it ships. That means having tests in place to validate new features or ensure changes do not have unintended consequences.

Testing with Dagster is a six-lesson course, with each lesson focused on a different aspect of testing. If you've never written tests before, this course provides a structured introduction to test design and an overview of testing in Python. If you're an experienced Python and Dagster user, you'll find best practices and techniques to streamline your testing suite.

Testing in Dagster

At Dagster, we believe strongly in the power of testing. The only way we can release a new version of Dagster every week is by ensuring everything works as we develop. We want our users to have that same level of confidence in the code they build.

This module covers:

  • The fundamentals of unit testing and writing asset tests in Dagster (a minimal sketch follows this list).
  • Strategies for handling external dependencies in your Dagster deployment while maintaining full control in a testing environment.
  • Best practices for integration testing to ensure your tests mirror real-world production scenarios.
  • Proven Dagster-specific testing tips to help you maintain and optimize your project.
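
To give a taste of the first of these topics, here is a minimal sketch of unit testing assets by calling them like ordinary Python functions. The state_populations and total_population assets below are hypothetical, written only to illustrate the pattern rather than taken from the course:

import dagster as dg


# Hypothetical assets used only to illustrate the pattern.
@dg.asset
def state_populations() -> dict[str, int]:
    # In production this might call an external API; here it is a pure function.
    return {"ny": 8_300_000, "ca": 39_000_000}


@dg.asset
def total_population(state_populations: dict[str, int]) -> int:
    return sum(state_populations.values())


def test_total_population():
    # Assets are plain decorated functions, so they can be invoked directly
    # with hand-built inputs, no Dagster machinery required.
    assert total_population({"ny": 2, "ca": 3}) == 5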

Example: Mocking API calls

@patch("requests.get")
def test_state_population_api_assets_config(mock_get, example_response, api_output):
    mock_response = Mock()
    mock_response.json.return_value = example_response
    mock_response.raise_for_status.return_value = None
    mock_get.return_value = mock_response

    result = dg.materialize(

        assets=[
            lesson_4.state_population_api_resource_config,
            lesson_4.total_population_resource_config,
        ],
        resources={"state_population_resource": lesson_4.StatePopulation()},
        run_config=dg.RunConfig(
            {"state_population_api_resource_config": lesson_4.StateConfig(name="ny")}
        ),
    )
    assert result.success

    assert result.output_for_node("state_population_api_resource_config") == api_output
    assert result.output_for_node("total_population_resource_config") == 9082539

Enroll Today

Like all Dagster University courses, Testing with Dagster is free and available to everyone. Simply sign up at Dagster University to get started. Once enrolled, you can track your progress and learn at your own pace.

Jump to the Testing with Dagster course page ↗

Have feedback or questions? Start a discussion in Slack or GitHub.

Interested in working with us? View our open roles.

Want more content like this? Follow us on LinkedIn.
