Apache Parquet Explained

Apache Parquet
Latest release version: 2.9.0[1]
Operating system: Cross-platform
Programming language: Java (reference implementation)[3]
Genre: Column-oriented data storage format
License: Apache License 2.0

Apache Parquet is a free and open-source column-oriented data storage format in the Apache Hadoop ecosystem. It is similar to RCFile and ORC, the other columnar-storage file formats in Hadoop, and is compatible with most of the data processing frameworks around Hadoop. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk.

History

The open-source project to build Apache Parquet began as a joint effort between Twitter[4] and Cloudera.[5] Parquet was designed as an improvement on the Trevni columnar storage format created by Doug Cutting, the creator of Hadoop. The first version, Apache Parquet 1.0, was released in July 2013. Since April 27, 2015, Apache Parquet has been a top-level Apache Software Foundation (ASF) project.[6] [7]

Features

Apache Parquet is implemented using the record-shredding and assembly algorithm,[8] which accommodates the complex data structures that can be used to store data.[9] The values in each column are stored in contiguous memory locations, which improves compression and allows queries to read only the columns they need.[10]

Parquet uses the Apache Thrift framework to define its metadata, which increases its flexibility; it can work with a number of programming languages such as C++, Java, Python, and PHP.[11]

As of August 2015,[12] Parquet is supported by big-data-processing frameworks including Apache Hive, Apache Drill, Apache Impala, Apache Crunch, Apache Pig, Cascading, Presto, and Apache Spark. It is also one of the external data formats supported by the pandas data manipulation and analysis library for Python.

Compression and encoding

In Parquet, compression is performed column by column, which enables different encoding schemes to be used for text and integer data. This strategy also keeps the door open for newer and better encoding schemes to be implemented as they are invented.

Parquet has automatic dictionary encoding, enabled dynamically for data with a small number of unique values (i.e., fewer than 10⁵), which enables significant compression and boosts processing speed.[13]
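Conceptually, dictionary encoding replaces each value with a small integer index into a per-column dictionary of unique values. A minimal Python sketch of the idea (not Parquet's actual on-disk layout):

```python
def dictionary_encode(values):
    # Build a dictionary of unique values and replace each value
    # with its index into that dictionary.
    dictionary = []
    index = {}
    codes = []
    for v in values:
        if v not in index:
            index[v] = len(dictionary)
            dictionary.append(v)
        codes.append(index[v])
    return dictionary, codes
```

The integer codes are then themselves compressed with the RLE/bit-packing hybrid described below, which is why low-cardinality columns compress so well.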

Bit packing

Storage of integers is usually done with dedicated 32 or 64 bits per integer. For small integers, packing multiple integers into the same space makes storage more efficient.
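For example, values that all fit in 2 bits can be stored at 2 bits each rather than 32. A simplified Python illustration of the idea (Parquet's real encoding packs into byte-aligned groups):

```python
def pack_uints(values, bit_width):
    # Pack small unsigned integers into one integer, using
    # bit_width bits per value instead of a full 32 or 64.
    packed = 0
    for i, v in enumerate(values):
        assert 0 <= v < (1 << bit_width), "value does not fit in bit_width"
        packed |= v << (i * bit_width)
    return packed

def unpack_uints(packed, bit_width, count):
    # Recover the original values by masking out bit_width bits at a time.
    mask = (1 << bit_width) - 1
    return [(packed >> (i * bit_width)) & mask for i in range(count)]
```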

Run-length encoding (RLE)

To optimize storage of multiple occurrences of the same value, a single value is stored once along with the number of occurrences.
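A minimal sketch of run-length encoding in Python, storing each run as a (value, count) pair:

```python
def rle_encode(values):
    # Collapse consecutive repeats into (value, run_length) pairs.
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]
```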

Parquet implements a hybrid of bit packing and RLE, in which the encoding switches based on which produces the best compression results. This strategy works well for certain types of integer data and combines well with dictionary encoding.
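The switching logic can be sketched as follows: long runs of a repeated value become RLE entries, while short runs are collected into literal groups destined for bit packing. This is an illustrative simplification (the threshold `min_run` and the output shape are assumptions, not Parquet's exact format, which works on byte-aligned groups of eight):

```python
def hybrid_encode(values, min_run=8):
    # Emit ("rle", value, count) for long runs, and group everything
    # else into ("literal", [...]) chunks for bit packing.
    out = []
    i, n = 0, len(values)
    while i < n:
        j = i
        while j < n and values[j] == values[i]:
            j += 1
        run = j - i
        if run >= min_run:
            out.append(("rle", values[i], run))
        elif out and out[-1][0] == "literal":
            out[-1] = ("literal", out[-1][1] + values[i:j])
        else:
            out.append(("literal", values[i:j]))
        i = j
    return out
```

Applied to dictionary codes, this is why sorted or mostly-constant columns shrink dramatically: they collapse into a handful of RLE entries.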

Comparison

Apache Parquet is comparable to the RCFile and Optimized Row Columnar (ORC) file formats; all three fall under the category of columnar data storage within the Hadoop ecosystem. All three offer better compression and encoding with improved read performance, at the cost of slower writes. In addition to these features, Apache Parquet supports limited schema evolution, i.e., the schema can be modified to accommodate changes in the data. It also provides the ability to add new columns and to merge schemas that do not conflict.

Apache Arrow is designed as an in-memory complement to on-disk columnar formats like Parquet and ORC. The Arrow and Parquet projects include libraries that allow for reading and writing between the two formats.

Notes and References

  1. "Apache Parquet – Releases". Apache.org. Retrieved 22 February 2023. Archived 22 February 2023 at https://web.archive.org/web/20230222213151/https://parquet.apache.org/blog/.
  2. "GitHub releases". GitHub.
  3. "Parquet-MR source code". GitHub. Retrieved 2 July 2019. Archived 11 June 2018 at https://web.archive.org/web/20180611015409/https://github.com/apache/parquet-mr.
  4. "Release Date". Retrieved 12 September 2016. Archived 20 October 2016 at https://web.archive.org/web/20161020154829/https://blog.twitter.com/2013/announcing-parquet-10-columnar-storage-for-hadoop.
  5. "Introducing Parquet: Efficient Columnar Storage for Apache Hadoop". Cloudera Engineering Blog. 13 March 2013. Retrieved 22 October 2018. Archived 4 May 2013 at https://web.archive.org/web/20130504133255/http://blog.cloudera.com/blog/2013/03/introducing-parquet-columnar-storage-for-apache-hadoop/ (original link dead).
  6. "Apache Parquet paves the way for better Hadoop data storage". 28 April 2015. Retrieved 21 May 2017. Archived 31 May 2017 at https://web.archive.org/web/20170531130443/http://www.infoworld.com/article/2915565/big-data/apache-parquet-paves-the-way-towards-better-hadoop-data-storage.html.
  7. "The Apache Software Foundation Announces Apache™ Parquet™ as a Top-Level Project". The Apache Software Foundation Blog. 27 April 2015. Retrieved 21 May 2017. Archived 20 August 2017 at https://web.archive.org/web/20170820074502/https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces75.
  8. "The striping and assembly algorithms from the Google-inspired Dremel paper". GitHub. Retrieved 13 November 2017. Archived 26 October 2020 at https://web.archive.org/web/20201026095824/https://github.com/julienledem/redelm/wiki/The-striping-and-assembly-algorithms-from-the-Dremel-paper.
  9. "Apache Parquet Documentation". Retrieved 12 September 2016. Archived 5 September 2016 at https://web.archive.org/web/20160905094320/http://parquet.apache.org/documentation/latest/ (original link dead).
  10. "Apache Parquet – Cloudera". Retrieved 12 September 2016. Archived 19 September 2016 at https://web.archive.org/web/20160919220719/http://www.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_ig_parquet.html.
  11. "Apache Thrift". Retrieved 14 September 2016. Archived 12 March 2021 at https://web.archive.org/web/20210312025614/http://thrift.apache.org/.
  12. "Supported Frameworks". Retrieved 12 September 2016. Archived 2 February 2015 at https://web.archive.org/web/20150202145641/https://cwiki.apache.org/confluence/display/Hive/Parquet.
  13. "Announcing Parquet 1.0: Columnar Storage for Hadoop". blog.twitter.com. Retrieved 14 September 2016. Archived 20 October 2016 at https://web.archive.org/web/20161020154829/https://blog.twitter.com/2013/announcing-parquet-10-columnar-storage-for-hadoop.