

Convert data into a linear format for efficient storage and processing.

Data serialization definition:

Data serialization is the process of converting complex data structures, such as objects or dictionaries, into a format that can be stored or transmitted, such as a byte stream or JSON string. This is useful in modern data pipelines for tasks such as saving data to disk, transmitting data across a network, or storing data in a database.

Data serialization example using Python:

Python supports several serialization formats, including pickle and JSON via the built-in `pickle` and `json` modules (YAML is available through the third-party PyYAML package). Here's an example of using the pickle module to serialize a Python object:

import pickle

# Define an object to serialize
data = {
    'name': 'Dagster',
    'age': 4,
    'email': ''
}
# Serialize the object to a byte stream
serialized_data = pickle.dumps(data)

# Write the byte stream to a file
with open('data.pickle', 'wb') as f:
    f.write(serialized_data)

This code defines a dictionary `data` and then serializes it using the pickle module's `dumps()` method. The resulting byte stream is then written to a file named data.pickle. If you open the file, you will see the data stored as a binary pickle byte stream rather than as human-readable text.


To deserialize the data later, you can use the `loads()` method:

import pickle

# Read the byte stream from the file
with open('data.pickle', 'rb') as f:
    serialized_data = f.read()

# Deserialize the byte stream into a Python object
data = pickle.loads(serialized_data)

# Print the deserialized data
print(data)

This code reads the serialized data from the file, deserializes it using pickle's `loads()` method, and then prints the resulting Python object. Note that pickle is Python-specific, and unpickling data from an untrusted source can execute arbitrary code, so only deserialize data you trust.
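JSON, mentioned above as another common option, is a text-based format that is human-readable and language-agnostic, though it supports only basic types such as strings, numbers, booleans, lists, and dictionaries. As a rough sketch, the same round trip using Python's built-in json module looks like this:

```python
import json

# The same dictionary used in the pickle example above
data = {
    'name': 'Dagster',
    'age': 4,
    'email': ''
}

# Serialize the object to a JSON string
json_string = json.dumps(data)

# Write the string to a file
with open('data.json', 'w') as f:
    f.write(json_string)

# Read the string back and deserialize it into a Python object
with open('data.json', 'r') as f:
    restored = json.loads(f.read())

print(restored == data)  # → True: the round trip preserves the data
```

Unlike pickle, the resulting data.json file is plain text that any language with a JSON parser can read, which makes JSON a safer default for data that crosses system boundaries.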

Other data engineering terms related to Data Transformation:


Align

Aligning data can mean one of three things: aligning datasets, meeting business rules, or arranging data elements in memory.

Big Data Processing

Process large volumes of data in parallel and distributed computing environments to improve performance.

Clean or Cleanse

Remove invalid or inconsistent data values, such as empty fields or outliers.


Cluster

Group data points based on similarities or patterns to facilitate analysis and modeling.


Denoise

Remove noise or artifacts from data to improve its accuracy and quality.


Denormalize

Optimize data for faster read access by reducing the number of joins needed to retrieve related data.


Discretize

Transform continuous data into discrete categories or bins to simplify analysis.


ETL

Extract, transform, and load data between different systems.


Filter

Extract a subset of data based on specific criteria or conditions.




Impute

Fill in missing data values with estimated or imputed values to facilitate analysis.


Munge

See 'wrangle'.


Normalize

Standardize data values to facilitate comparison and analysis; organize data into a consistent format.


Reduce

Convert a large set of data into a smaller, more manageable form without significant loss of information.


Reshape

Change the structure of data to better fit specific analysis or modeling requirements.


Shard

Break down large datasets into smaller, more manageable pieces for easier processing and analysis.


Skew

An imbalance in the distribution or representation of data.


Standardize

Transform data to a common unit or format to facilitate comparison and analysis.


Tokenize

Convert data into tokens or smaller units to simplify analysis or processing.


Transform

Convert data from one format or structure to another.


Structure

Convert unstructured data into a structured format.