Building a trusted and productive data platform.
Build a high-performance data platform that delivers data assets all stakeholders can trust.
As data operations scale and teams take on a growing number of interdependent data pipelines, it becomes harder to deliver trusted data to the organization while keeping the engineering team productive. Adopting a proper framework becomes essential for making the work repeatable and for delivering operational efficiency and observability.
In this fireside chat, Sandy Ryza, Lead Engineer on the Dagster Project, and Pete Hunt, CEO of Dagster Labs, discuss Dagster's framework, built around the core concept of Data Assets. By adopting such a framework, data engineering and ML teams can build repeatable processes while leveraging key capabilities like continuous integration, testing, and observability at scale.
This session will benefit heads of data engineering and ML looking to build data processes that truly scale, protecting the productivity of the core engineering team while delivering trusted, timely data to all stakeholders.