
Dagster Data Engineering Glossary:


Data Transformation

Convert data from one format or structure to another.

Data Transformation definition:

Data transformation in the context of data engineering refers to the process of converting, reshaping, or manipulating raw data into a structured and usable format.

Data transformation involves applying various operations such as filtering, aggregation, cleaning, and normalization to ensure that the data is consistent, accurate, and ready for analysis. Data transformation plays a crucial role in preparing data for storage, integration, and analysis, enabling organizations to derive valuable insights and make informed decisions from their data assets.
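As a minimal illustration of cleaning and normalization (using hypothetical sample data, not data from any particular system), a short pandas sketch:

```python
import pandas as pd

# Hypothetical raw data with an inconsistent code and missing values.
df = pd.DataFrame({
    "country": ["us", "US", "DE", None],
    "revenue": [100.0, 250.0, None, 80.0],
})

# Cleaning: drop rows missing the key field, fill missing numeric values.
df = df.dropna(subset=["country"])
df["revenue"] = df["revenue"].fillna(0.0)

# Normalization: make the country codes consistent.
df["country"] = df["country"].str.upper()

print(df)
```

After these steps the data is consistent enough to aggregate or load downstream.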

That said, data can be transformed at many different steps in a data pipeline. A key to success is being able to track and observe every transformation step introduced along the way, and to build those steps into the data lineage.

Data transformation approaches:

Transforming data on the fly (i.e., during the extraction or loading stage) can be more efficient and faster, as it avoids the need for a separate transformation layer. However, it can also be more complex and harder to maintain, especially as the volume and complexity of the data increase.

Using a transformation layer like dbt can provide several benefits. For example, it can simplify and centralize the transformation logic, making it easier to maintain, test, and audit. It can also provide features like version control, documentation, and collaboration tools. Additionally, dbt can help enforce data quality and consistency by providing automated data validation and testing.

In general, using a transformation layer like dbt can be a best practice for data processing pipelines, especially in larger and more complex environments. However, it's important to carefully evaluate the specific needs and trade-offs of your project to determine the best approach.
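The testability benefit can be illustrated in plain Python: when transformation logic lives in a dedicated, named function rather than inline in a load script, it can be validated in isolation (a sketch with hypothetical data; dbt itself expresses such logic in SQL models):

```python
import pandas as pd

def normalize_emails(df: pd.DataFrame) -> pd.DataFrame:
    """Centralized, testable transformation step: strip and lowercase emails."""
    out = df.copy()
    out["email"] = out["email"].str.strip().str.lower()
    return out

# The isolated step is easy to check, independent of extraction or loading.
sample = pd.DataFrame({"email": ["  Alice@Example.COM ", "bob@example.com"]})
result = normalize_emails(sample)
print(result["email"].tolist())
```

Keeping each transformation as a named, pure step is what makes version control, documentation, and automated testing of the transformation layer practical.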

There are many Python functions that can be used for data transformation in data engineering. Some of the most frequently used ones include:

  1. map() and filter(): These built-in functions transform and filter the elements of an iterable. map() applies a function to each element, while filter() keeps only the elements that satisfy a predicate. In Python 3, both return lazy iterators rather than lists.
  2. apply(): This Pandas method applies a function along an axis of a DataFrame, i.e., to each row or column.
  3. join() and merge(): These Pandas methods combine data from multiple sources. join() combines DataFrames based on a common index, while merge() combines them based on common columns.
  4. groupby(): This Pandas method groups data by one or more columns so that operations can be performed on each group.
  5. agg(): Used together with groupby(), this method applies one or more aggregation functions to each group.
  6. pivot_table(): This Pandas function creates a spreadsheet-style pivot table from a DataFrame.
  7. split() and join(): These string methods split a string into a list of substrings and join a list of strings into a single string, respectively.
  8. datetime and timedelta: These classes from the standard-library datetime module are used to work with dates and times, such as parsing a string into a datetime object or calculating the difference between two dates.
  9. json.loads() and json.dumps(): These functions convert a JSON string to a Python object and vice versa.
  10. numpy.reshape(): This NumPy function rearranges an array into a new shape without changing its data.
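The following sketch chains several of the functions above on made-up sample weather records (the data and field names are purely illustrative):

```python
import json
from datetime import datetime, timedelta

import numpy as np
import pandas as pd

# Hypothetical raw records, e.g. as received from an API.
raw = (
    '[{"city": "NYC", "temp_f": 68, "day": "2024-01-01"},'
    ' {"city": "NYC", "temp_f": 72, "day": "2024-01-02"},'
    ' {"city": "LA", "temp_f": 81, "day": "2024-01-01"}]'
)

# json.loads(): JSON string -> Python objects.
records = json.loads(raw)

# map() and filter(): convert Fahrenheit to Celsius, keep warm days only.
to_celsius = map(lambda r: {**r, "temp_c": (r["temp_f"] - 32) * 5 / 9}, records)
warm = list(filter(lambda r: r["temp_c"] > 15, to_celsius))

df = pd.DataFrame(warm)

# datetime and timedelta: parse the date strings, derive a follow-up date.
df["day"] = df["day"].apply(lambda s: datetime.strptime(s, "%Y-%m-%d"))
df["next_day"] = df["day"] + timedelta(days=1)

# groupby() + agg(): mean temperature per city.
summary = df.groupby("city").agg(mean_temp_c=("temp_c", "mean"))
print(summary)

# numpy.reshape(): flat array -> 2x2 matrix.
print(np.arange(4).reshape(2, 2))
```

Each step here corresponds to one of the numbered functions, showing how they compose into a small transformation pipeline.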

Other data engineering terms related to Data Transformation:

Align

Aligning data can mean one of three things: aligning datasets, meeting business rules, or arranging data elements in memory.

Clean or Cleanse

Remove invalid or inconsistent data values, such as empty fields or outliers.

Cluster

Group data points based on similarities or patterns to facilitate analysis and modeling.

Curate

Select, organize, and annotate data to make it more useful for analysis and modeling.

Denoise

Remove noise or artifacts from data to improve its accuracy and quality.

Denormalize

Optimize data for faster read access by reducing the number of joins needed to retrieve related data.

Derive

Extract, transform, and generate new data from existing datasets.

Discretize

Transform continuous data into discrete categories or bins to simplify analysis.

ETL

Extract, transform, and load data between different systems.

Encode

Convert categorical variables into numerical representations for ML algorithms.

Filter

Extract a subset of data based on specific criteria or conditions.

Fragment

Break data down into smaller chunks for storage and management purposes.

Homogenize

Make data uniform, consistent, and comparable.

Impute

Fill in missing data values with estimated or substituted values to facilitate analysis.

Linearize

Transform the relationship between variables to make datasets approximately linear.

Munge

See 'wrangle'.

Normalize

Standardize data values to facilitate comparison and analysis. Organize data into a consistent format.

Reduce

Convert a large set of data into a smaller, more manageable form without significant loss of information.

Reshape

Change the structure of data to better fit specific analysis or modeling requirements.

Serialize

Convert data into a linear format for efficient storage and processing.

Shred

Break down large datasets into smaller, more manageable pieces for easier processing and analysis.

Skew

An imbalance in the distribution or representation of data.

Split

Divide a dataset into training, validation, and testing sets for machine learning model training.

Standardize

Transform data to a common unit or format to facilitate comparison and analysis.

Tokenize

Convert data into tokens or smaller units to simplify analysis or processing.

Wrangle

Convert unstructured data into a structured format.