Software-defined Asset | Dagster Glossary

Software-defined Asset

A declarative design pattern that represents a data asset through code.

Definition of Software-defined Asset:

A software-defined asset (SDA) is a declarative design pattern that represents a data asset through code. An asset is an object in persistent storage that captures some understanding of the world. It can be any type of object, such as a database table or view, a file in local storage or in cloud storage like Amazon S3, or a machine learning model.

Software-defined assets are defined by writing code that describes the asset you want to exist, its upstream dependencies, and a function that can be run to compute the contents of the asset. This approach allows you to focus on the assets themselves—the end products of your data engineering efforts—rather than the execution of tasks.
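For instance, here is a minimal sketch of that idea using Dagster's @asset decorator (the asset names and the data source are hypothetical; a fuller, step-by-step example appears later in this entry):

import pandas as pd
from dagster import asset

@asset
def orders():
    # The asset you want to exist: a table of raw orders
    # (reading from a local Parquet file is just an assumption for this sketch)
    return pd.read_parquet('orders.parquet')

@asset
def order_totals(orders):
    # Naming 'orders' as a parameter declares it as an upstream dependency;
    # the function body computes the contents of the asset
    return orders.groupby('customer_id')['amount'].sum().reset_index()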

Here are some key points about Software-defined Assets:

  1. Declarative Nature: SDAs allow developers to declare what the end state of an asset should be, and the orchestrator takes care of the execution logic required to achieve that state. This shifts the focus from task execution to asset production.

  2. Observability and Scheduling: SDAs provide enhanced observability into your data assets and allow for advanced scheduling. This makes it easier to understand the state of your assets and when they should be updated.

  3. Environment Agnosticism: SDAs are designed to be environment-agnostic, meaning that the same asset definitions can be used across different environments, such as development and production, without changes to the asset code.

  4. Data Lineage: SDAs have clear data lineage, which makes it easier to understand how data flows through your system and to debug issues when they arise.

  5. Integration with External Tools: SDAs can be integrated with external tools like dbt, allowing the orchestrator to track the lineage of every individual table created by these tools (see the first sketch following this list).

  6. Rich Metadata and Grouping: SDAs support rich metadata and grouping tools, which are useful for organizing and searching assets within large and complex organizations.

  7. Partitioning and Backfills: SDAs support time partitioning and backfills out of the box, which is useful for managing historical data and ensuring data consistency (see the partitioning sketch following this list).
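
To illustrate point 5, here is a minimal sketch using the dagster-dbt integration, assuming a dbt project whose compiled manifest lives at target/manifest.json:

from pathlib import Path

from dagster import AssetExecutionContext
from dagster_dbt import DbtCliResource, dbt_assets

# Every dbt model in the manifest becomes its own Software-defined Asset,
# so the orchestrator can track the lineage of each individual table.
# (A DbtCliResource must be supplied under the 'dbt' key in your Definitions.)
@dbt_assets(manifest=Path('target/manifest.json'))
def my_dbt_models(context: AssetExecutionContext, dbt: DbtCliResource):
    yield from dbt.cli(['build'], context=context).stream()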
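
And to illustrate point 7, here is a minimal sketch of a daily-partitioned asset (the start date and asset name are arbitrary choices); each partition can be materialized or backfilled independently:

from dagster import AssetExecutionContext, DailyPartitionsDefinition, asset

@asset(partitions_def=DailyPartitionsDefinition(start_date='2024-01-01'))
def daily_events(context: AssetExecutionContext):
    # context.partition_key identifies the date being materialized,
    # e.g. '2024-03-15'; a backfill runs this function once per missing date
    date = context.partition_key
    # fetch and persist just that day's slice of data here
    ...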

In summary, Software-defined Assets represent a novel declarative, code-based approach to defining and managing data assets, providing clear lineage, observability, and the ability to work seamlessly across different environments.

An example of a Software-defined Asset in Python

Here is an example of defining an asset in code using the Dagster data orchestration framework:

The final asset is generated by reading data from a CSV file, processing it, and then writing the processed data to a new CSV file.

In this example, we will create a simple pipeline with three assets:

  1. raw_data: Reads data from an input CSV file named input_data.csv into a Pandas DataFrame.
  2. processed_data: Takes the raw data DataFrame as input, performs some processing (in this case, adding a new column containing the doubled values of an existing column), and returns the processed DataFrame.
  3. write_processed_data: Takes the processed DataFrame and writes it to a new CSV file named processed_data.csv.

Each asset is defined with the @asset decorator, and the dependencies between assets are implicitly defined by the function arguments. For example, processed_data depends on raw_data because it takes raw_data as an argument.

The assets are rendered automatically in the Dagster UI along with upstream and downstream dependencies.

Note that, by default, Dagster's I/O manager will pickle each asset function's return value to local storage (while write_processed_data also writes its result directly to a CSV file), and all three assets will be observable, meaning we will capture metadata on each materialization of the asset.

import pandas as pd
from dagster import asset

# Asset to read data from a CSV file
@asset
def raw_data():
    df = pd.read_csv('input_data.csv')
    return df

# Asset to process the raw data
@asset
def processed_data(raw_data):
    # Imagine some processing logic here, for example:
    # - Cleaning the data
    # - Filtering rows
    # - Transforming columns
    # For simplicity, we'll just add a new column with transformed data
    processed_df = raw_data.copy()
    processed_df['new_column'] = raw_data['existing_column'] * 2
    return processed_df

# Asset to write the processed data to a new CSV file
@asset
def write_processed_data(processed_data):
    processed_data.to_csv('processed_data.csv', index=False)
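
If you want to attach your own metadata on top of what Dagster records automatically, one option (sketched here with an arbitrary metadata key and asset name) is to wrap the return value in an Output:

from dagster import Output, asset

@asset
def processed_data_with_metadata(raw_data):
    processed_df = raw_data.copy()
    processed_df['new_column'] = raw_data['existing_column'] * 2
    # The metadata appears on each materialization event in the Dagster UI
    return Output(processed_df, metadata={'num_rows': len(processed_df)})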

To make these assets discoverable by Dagster, you would make sure they are loaded into your project's Definitions object:

# __init__.py
from dagster import Definitions, load_assets_from_modules

from . import assets

all_assets = load_assets_from_modules([assets])

defs = Definitions(
    assets=all_assets,
)

This code location now contains a sequence of Software-defined Assets that can be materialized to perform a data processing workflow. When you run this in Dagster, the system will automatically resolve the dependencies and execute the assets in the correct order.
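
Key point 2 above mentioned scheduling. As one possible sketch (the job name and cron string are arbitrary choices), you could extend the Definitions object with a job and a daily schedule:

# __init__.py
from dagster import (
    AssetSelection,
    Definitions,
    ScheduleDefinition,
    define_asset_job,
    load_assets_from_modules,
)

from . import assets

all_assets = load_assets_from_modules([assets])

# A job that targets every asset in this code location
daily_refresh_job = define_asset_job('daily_refresh', selection=AssetSelection.all())

defs = Definitions(
    assets=all_assets,
    jobs=[daily_refresh_job],
    schedules=[ScheduleDefinition(job=daily_refresh_job, cron_schedule='0 6 * * *')],
)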


Other data engineering terms related to Data Management:

Append

Adding or attaching new records or data items to the end of an existing dataset, database table, file, or list.

Archive

Move rarely accessed data to a low-cost, long-term storage solution to reduce costs. Store data for long-term retention and compliance.

Augment

Add new data or information to an existing dataset to enhance its value.

Auto-materialize

The automatic execution of computations and the persistence of their results.

Backup

Create a copy of data to protect against loss or corruption.

Batch Processing

Process large volumes of data all at once in a single operation or batch.

Cache

Store expensive computation results so they can be reused, not recomputed.

Categorize

Organizing and classifying data into different categories, groups, or segments.

Deduplicate

Identify and remove duplicate records or entries to improve data quality.

Deserialize

Deserialization is essentially the reverse process of serialization. See: 'Serialize'.

Dimensionality

Analyzing the number of features or attributes in the data to improve performance.

Encapsulate

The bundling of data with the methods that operate on that data.

Enrich

Enhance data with additional information from external sources.

Export

Extract data from a system for use in another system or application.

Graph Theory

A powerful tool to model and understand intricate relationships within our data systems.

Idempotent

An operation that produces the same result each time it is performed.

Index

Create an optimized data structure for fast search and retrieval.

Integrate

Combine data from different sources to create a unified view for analysis or reporting.

Lineage

Understand how data moves through a pipeline, including its origin, transformations, dependencies, and ultimate consumption.

Linearizability

Ensure that each individual operation on a distributed system appears to occur instantaneously.

Materialize

Executing a computation and persisting the results into storage.

Memoize

Store the results of expensive function calls and reuse them when the same inputs occur again.

Merge

Combine data from multiple datasets into a single dataset.

Model

Create a conceptual representation of data objects.

Monitor

Track data processing metrics and system health to ensure high availability and performance.

Named Entity Recognition

Locate and classify named entities in text into pre-defined categories.

Parse

Interpret and convert data from one format to another.

Partition

Data partitioning is a technique that data engineers and ML engineers use to divide data into smaller subsets for improved performance.

Prep

Transform your data so it is fit-for-purpose.

Preprocess

Transform raw data before data analysis or machine learning modeling.

Replicate

Create a copy of data for redundancy or distributed processing.

Scaling

Increasing the capacity or performance of a system to handle more data or traffic.

Schema Inference

Automatically identify the structure of a dataset.

Schema Mapping

Translate data from one schema or structure to another to facilitate data integration.

Secondary Index

Improve the efficiency of data retrieval in a database or storage system.

Synchronize

Ensure that data in different systems or databases are in sync and up-to-date.

Validate

Check data for completeness, accuracy, and consistency.

Version

Maintain a history of changes to data for auditing and tracking purposes.