
Big Data Processing

Process large volumes of data in parallel and distributed computing environments to improve performance.

In data engineering, "big data" refers to datasets that are so large and complex that they cannot be easily processed or analyzed using traditional data processing tools and techniques. The exact volume of data that qualifies as "big" can vary depending on the context, but in general, it is characterized by the three Vs: volume, velocity, and variety.

  • Volume refers to the amount of data that needs to be processed. Big data sets typically start at the terabyte (TB) scale and can go up to petabytes (PB) or even exabytes (EB) of data.
  • Velocity refers to the speed at which data is generated and needs to be processed. With the advent of real-time data streaming, big data systems are expected to handle data at high velocities and in near-real-time.
  • Variety refers to the diversity of data types and formats that need to be processed. Big data systems are expected to handle structured, semi-structured, and unstructured data from a variety of sources, including social media, sensors, and logs.
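As a minimal illustration of the parallel-processing idea in the definition above (not tied to any specific big data framework), the sketch below partitions a dataset into chunks and processes them across worker processes in a map/reduce style. The names `process_chunk` and `parallel_process`, and the sum-of-squares workload, are illustrative choices, not part of any standard API:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Illustrative per-partition work: sum of squares over one chunk
    return sum(x * x for x in chunk)

def parallel_process(data, n_workers=4, chunk_size=1000):
    # Partition the dataset into fixed-size chunks
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Map: process partitions in parallel across worker processes;
    # Reduce: combine the partial results into one value
    with Pool(n_workers) as pool:
        partials = pool.map(process_chunk, chunks)
    return sum(partials)

if __name__ == "__main__":
    data = list(range(10_000))
    print(parallel_process(data))
```

Distributed systems such as Spark apply the same pattern, but across many machines rather than many processes on one machine.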

If you are new to data engineering, it is unlikely that you will need to worry about Big Data architectures. In fact, the rapid scaling up of computing power, storage, and high-speed networking means that data challenges that would have been considered “big data” a decade ago can now be comfortably managed on a local machine, as discussed in this article.
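One reason a single machine goes further than people expect is that many workloads can be streamed rather than loaded into memory all at once. The sketch below, using only the standard library, computes an aggregate over a CSV one row at a time; `stream_aggregate` and the `amount` column are hypothetical names for illustration:

```python
import csv
import io

def stream_aggregate(csv_file, value_column):
    # Stream the file row by row instead of loading it whole,
    # keeping only a running total and count in memory
    total, count = 0.0, 0
    reader = csv.DictReader(csv_file)
    for row in reader:
        total += float(row[value_column])
        count += 1
    return total / count if count else None

if __name__ == "__main__":
    # In-memory stand-in for a large CSV file on disk
    data = io.StringIO("amount\n10\n20\n30\n")
    print(stream_aggregate(data, "amount"))
```

Because memory use stays constant regardless of file size, this pattern scales to files far larger than RAM on an ordinary laptop.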

Finally, it is worth noting that the term "big data" can mean different things to different people. It may refer to any large amount of data, or it could refer specifically to datasets that are too large to handle with traditional data processing tools.

Other data engineering terms related to Data Transformation:


Align

Aligning data can mean one of three things: aligning datasets, meeting business rules, or arranging data elements in memory.

Clean or Cleanse

Remove invalid or inconsistent data values, such as empty fields or outliers.

Cluster

Group data points based on similarities or patterns to facilitate analysis and modeling.

Denoise

Remove noise or artifacts from data to improve its accuracy and quality.

Denormalize

Optimize data for faster read access by reducing the number of joins needed to retrieve related data.

Discretize

Transform continuous data into discrete categories or bins to simplify analysis.

ETL

Extract, transform, and load data between different systems.

Filter

Extract a subset of data based on specific criteria or conditions.

Flatten

Convert data into a linear format for efficient storage and processing.

Impute

Fill in missing data values with estimated or imputed values to facilitate analysis.

Munge

See 'wrangle'.

Normalize

Standardize data values to facilitate comparison and analysis; organize data into a consistent format.

Reduce

Convert a large set of data into a smaller, more manageable form without significant loss of information.

Reshape

Change the structure of data to better fit specific analysis or modeling requirements.

Serialize

Convert data into a linear format for efficient storage and processing.

Shard

Break down large datasets into smaller, more manageable pieces for easier processing and analysis.

Skew

An imbalance in the distribution or representation of data.

Standardize

Transform data to a common unit or format to facilitate comparison and analysis.

Tokenize

Convert data into tokens or smaller units to simplify analysis or processing.

Transform

Convert data from one format or structure to another.

Wrangle

Convert unstructured data into a structured format.