Big Data Processing

Process large volumes of data in parallel and distributed computing environments to improve performance.

In data engineering, "big data" refers to datasets that are so large and complex that they cannot be easily processed or analyzed using traditional data processing tools and techniques. The exact volume of data that qualifies as "big" can vary depending on the context, but in general, it is characterized by the three Vs: volume, velocity, and variety.

  • Volume refers to the amount of data that needs to be processed. Big data sets typically start at the terabyte (TB) scale and can go up to petabytes (PB) or even exabytes (EB) of data.
  • Velocity refers to the speed at which data is generated and needs to be processed. With the advent of real-time data streaming, big data systems are expected to handle data at high velocities and in near-real-time.
  • Variety refers to the diversity of data types and formats that need to be processed. Big data systems are expected to handle structured, semi-structured, and unstructured data from a variety of sources, including social media, sensors, and logs.
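The core idea behind processing data of this scale is split-apply-combine: partition the dataset into chunks, process the chunks in parallel, and merge the partial results. As a minimal sketch, the same pattern can be shown on a single machine with Python's `multiprocessing` pool standing in for a distributed framework such as Spark or Hadoop MapReduce; the word-count task and chunk size here are illustrative, not prescriptive.

```python
# Split-apply-combine sketch: a local stand-in for distributed processing.
from multiprocessing import Pool

def process_chunk(chunk):
    # "Apply" step: hypothetical per-chunk work, here counting words.
    return sum(len(line.split()) for line in chunk)

def parallel_word_count(lines, workers=4, chunk_size=1000):
    # "Split" step: partition the dataset into fixed-size chunks.
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    # Each chunk is processed by a separate worker process.
    with Pool(workers) as pool:
        partial_counts = pool.map(process_chunk, chunks)
    # "Combine" step: merge the partial results.
    return sum(partial_counts)

if __name__ == "__main__":
    data = ["the quick brown fox"] * 5000
    print(parallel_word_count(data))  # 5000 lines x 4 words = 20000
```

In a true distributed system the chunks would live on different machines and the framework would also handle scheduling, data locality, and fault tolerance, but the processing model is the same.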

If you are new to data engineering, you are unlikely to need a big data architecture. The rapid growth of computing power, storage, and high-speed networking means that data challenges that would have been considered "big data" a decade ago can now be handled comfortably on a single machine.

Finally, it is worth noting that the term "big data" is not precisely defined: some people use it loosely for any large amount of data, while others reserve it for datasets that are genuinely too large to handle with traditional data processing tools.
