• Parquet: Open-source columnar format for Hadoop (1 of 6)

    published: 21 Nov 2014
    duration: 15:01
    views: 9615
    https://wn.com/Parquet_Open_Source_Columnar_Format_For_Hadoop_(1_Of_6)
  • Parquet: Open-source columnar format for Hadoop (2 of 6)

    published: 21 Nov 2014
    duration: 15:01
    views: 4342
    https://wn.com/Parquet_Open_Source_Columnar_Format_For_Hadoop_(2_Of_6)
  • Parquet: Open-source columnar format for Hadoop (3 of 6)

    published: 21 Nov 2014
    duration: 15:01
    views: 2982
    https://wn.com/Parquet_Open_Source_Columnar_Format_For_Hadoop_(3_Of_6)
  • Parquet: Open-source columnar format for Hadoop (4 of 6)

    published: 21 Nov 2014
    duration: 15:01
    views: 2006
    https://wn.com/Parquet_Open_Source_Columnar_Format_For_Hadoop_(4_Of_6)
  • Parquet: Open-source columnar format for Hadoop (5 of 6)

    published: 21 Nov 2014
    duration: 15:01
    views: 998
    https://wn.com/Parquet_Open_Source_Columnar_Format_For_Hadoop_(5_Of_6)
  • Parquet: Open-source columnar format for Hadoop (6 of 6)

    published: 21 Nov 2014
    duration: 22:02
    views: 705
    https://wn.com/Parquet_Open_Source_Columnar_Format_For_Hadoop_(6_Of_6)
  • Parquet Format at Twitter

    Julien Le Dem discusses Parquet, a columnar file format for Hadoop. The performance and compression benefits of using columnar storage formats for storing and processing large amounts of data are well documented in the academic literature as well as in several commercial analytical databases. Parquet supports deeply nested structures and efficient encoding and column compression schemes, and is designed to be compatible with a variety of higher-level type systems. Its integration in most of the Hadoop processing frameworks (Impala, Hive, Pig, Cascading, Crunch, Scalding, Spark, ...) and serialization models (Thrift, Avro, Protocol Buffers, ...) makes it easy to use in existing ETL and processing pipelines, while giving flexibility of choice on the query engine (whether in Java or C++). Join the conversation at http://twitter.com/university (A minimal Scala sketch of the nested-structure support follows this entry.)

    published: 18 Apr 2014
    duration: 23:45
    views: 9004
    https://wn.com/Parquet_Format_At_Twitter
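
    The support for deeply nested structures mentioned above is easy to exercise from Spark. A minimal sketch, assuming a spark-shell style SparkSession named spark; the Phone/Contact case classes and the /tmp/contacts path are illustrative assumptions, not from the talk:

    // Assumes a SparkSession named `spark`, as in spark-shell.
    import spark.implicits._

    // Hypothetical nested schema: each contact carries a repeated group of phones.
    case class Phone(kind: String, number: String)
    case class Contact(name: String, phones: Seq[Phone])

    val contacts = Seq(
      Contact("Jack", Seq(Phone("home", "555-0100"), Phone("work", "555-0101"))),
      Contact("Jill", Seq(Phone("mobile", "555-0102")))
    ).toDF()

    // Parquet encodes the nested, repeated `phones` field with definition and
    // repetition levels, so it is still stored column by column.
    contacts.write.format("parquet").save("/tmp/contacts")
    spark.read.parquet("/tmp/contacts").printSchema()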
  • Uwe L Korn - Efficient and portable DataFrame storage with Apache Parquet

    Filmed at PyData London 2017, www.pydata.org. Description: Apache Parquet is the most used columnar data format in the big-data processing space and recently gained Pandas support. It leverages various techniques to store data in a CPU- and I/O-efficient way and provides capabilities to push down queries to the I/O layer. This talk shows how to use it in Python, details its structure, and presents its portable usage with other tools. Abstract: Since its creation in 2013, Apache Parquet has risen to be the most widely used binary columnar storage format in the big-data processing space. While supporting basic attributes of a columnar format, like reading a subset of columns, it also leverages techniques to store the data efficiently while providing fast access. In addition, the format is structured in such a fashion that, when supplied to a query engine, Parquet provides indexing hints and statistics to quickly skip over chunks of irrelevant data. (A sketch of this column pruning and push-down from Spark follows this entry.) In recent months, efficient implementations to load and store Parquet files in Python became available, bringing the efficiency of the format to Pandas DataFrames. While this provides a new option to store DataFrames, it especially allows us to share data between Pandas and a lot of other popular systems like Apache Spark or Apache Impala. This talk shows the performance improvements that Parquet brings, but also highlights important aspects of the format that make it portable and efficient for queries on large amounts of data. As not all features are yet available in Python, an overview of the upcoming Python-specific improvements, and of how the Parquet format will be extended in general, is given at the end of the talk.

    published: 15 May 2017
    duration: 28:31
    views: 443
    https://wn.com/Uwe_L_Korn_Efficient_And_Portable_Dataframe_Storage_With_Apache_Parquet
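
    The column-subset reads and query push-down described above can be sketched from Spark in Scala (the talk itself covers the Python side; the /tmp/person path reuses, as an assumption, the dataset written in the Spark read/write demo further down this list):

    // Assumes a SparkSession named `spark` and Parquet data at /tmp/person.
    import spark.implicits._

    // Only `name` and `age` are materialised (column pruning), and the
    // age > 21 predicate can be pushed into the Parquet reader, which uses
    // row-group min/max statistics to skip irrelevant chunks.
    val adults = spark.read.parquet("/tmp/person")
      .select("name", "age")
      .filter($"age" > 21)

    adults.explain()   // the physical plan lists PushedFilters for the scan
    adults.show()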
  • Apache Parquet & Apache Spark

    - Overview of Apache Parquet and key benefits of using Apache Parquet.
    - Demo of using Apache Spark with Apache Parquet.

    published: 16 Jun 2016
    duration: 13:43
    views: 6439
    https://wn.com/Apache_Parquet_Apache_Spark
  • UNILIN production process parquet

    Take a look behind the scenes and find out how UNILIN manufactures its parquet hardwood floors. In this 30-minute explanatory movie, you follow a piece of wood as it travels through the factories in the Czech Republic and Malaysia and is transformed from tree trunk to finished, ready-to-use hardwood floor.

    published: 18 Nov 2015
    duration: 28:35
    views: 8011
    https://wn.com/Unilin_Production_Process_Parquet
  • Spark + Parquet In Depth: Spark Summit East talk by: Emily Curtin and Robbie Strickland

    published: 14 Feb 2017
    duration: 29:50
    views: 6484
    https://wn.com/Spark_Parquet_In_Depth_Spark_Summit_East_Talk_By_Emily_Curtin_And_Robbie_Strickland
  • Adaptive Data Prep Innovations Using Caching & Apache Parquet

    Nenshad Bardoliwalla discusses enabling efficiency in adaptive data preparation by utilizing caching infrastructure and the Parquet columnar format. These techniques are used in the Fall 2014 Paxata release, which features an in-line and remote-node caching server for materialized operators, as well as Apache Parquet for persisted compression on top of Paxata's columnar database.

    published: 17 Oct 2014
    duration: 1:29
    views: 460
    https://wn.com/Adaptive_Data_Prep_Innovations_Using_Caching_Apache_Parquet
  • Spark: Reading and Writing to Parquet Storage Format

    - Uses the Spark DataFrame save capability.
    - The code/approach works on both local HDD and HDFS environments.
    Related video: Introduction to Apache Spark and Parquet, https://www.youtube.com/watch?v=itm0TINmK9k

    Code for the demo:

    case class Person(name: String, age: Int, sex: String)
    val data = Seq(Person("Jack", 25, "M"), Person("Jill", 25, "F"), Person("Jess", 24, "F"))
    val df = data.toDF()

    import org.apache.spark.sql.SaveMode
    // Plain Parquet write, appending to any existing data at the path.
    df.select("name", "age", "sex").write.mode(SaveMode.Append).format("parquet").save("/tmp/person")
    // Partitioned write: one subdirectory per distinct `sex` value.
    df.select("name", "age", "sex").write.partitionBy("sex").mode(SaveMode.Append).format("parquet").save("/tmp/person_partitioned/")

    // Read the data back (Spark 1.x style SQLContext, as in the video).
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val dfPerson = sqlContext.read.parquet("/tmp/person")

    published: 19 Nov 2016
    duration: 11:28
    views: 2296
    https://wn.com/Spark_Reading_And_Writing_To_Parquet_Storage_Format
  • Parquet vs Avro

    In this video we cover the pros and cons of two popular file formats used in the Hadoop ecosystem: Apache Parquet and Apache Avro. Agenda: where these formats are used; similarities; key considerations when choosing (read vs write characteristics, tooling, schema evolution); general guidelines; and scenarios where you may keep data in both Parquet and Avro. Avro is a row-based storage format for Hadoop; it is more than a serialisation framework, though, being an IPC framework as well. Parquet is a column-based storage format for Hadoop. Both are highly optimised (versus plain text), both are self-describing, and both use compression. If your use case typically scans or retrieves all of the fields in a row in each query, Avro is usually the best choice. If your dataset has many columns, and your use case typically involves working with a subset of those columns rather than entire records, Parquet is optimized for that kind of work. Finally, the video covers cases where you may use both file formats. (A short Scala sketch of Parquet schema evolution follows this entry.)

    published: 16 Feb 2017
    duration: 13:28
    views: 4123
    https://wn.com/Parquet_Vs_Avro
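
    One of the key considerations listed above, schema evolution, can be sketched with Spark and Parquet. A minimal example under assumed names (the /tmp/people_evolving path and columns are illustrative):

    // Assumes a SparkSession named `spark`, as in spark-shell.
    import spark.implicits._

    // Two batches written over time; the second adds a `city` column.
    Seq(("Jack", 25)).toDF("name", "age")
      .write.mode("append").parquet("/tmp/people_evolving")
    Seq(("Jill", 25, "London")).toDF("name", "age", "city")
      .write.mode("append").parquet("/tmp/people_evolving")

    // mergeSchema reconciles the per-file schemas; rows from the first
    // batch come back with a null `city`.
    val merged = spark.read.option("mergeSchema", "true").parquet("/tmp/people_evolving")
    merged.printSchema()
    merged.show()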
  • Apache Parquet: Parquet file internals and inspecting Parquet file structure

    In this video we look at the internal structure of the Apache Parquet storage format and use the parquet-tools utility to inspect the contents of a file. Apache Parquet is a columnar storage format available in the Hadoop ecosystem. (A sketch of inspecting the footer programmatically follows this entry.) Related videos: Creating Parquet files using Apache Spark: https://youtu.be/-ra0pGUw7fo Parquet vs Avro: https://youtu.be/sLuHzdMGFNA

    published: 22 Apr 2017
    duration: 24:38
    views: 2276
    https://wn.com/Apache_Parquet_Parquet_File_Internals_And_Inspecting_Parquet_File_Structure
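
    Besides the parquet-tools CLI shown in the video, the same footer information can be pulled programmatically. A rough sketch against the parquet-hadoop Java API (the part-file name is a hypothetical example, and this readFooter overload is deprecated in newer Parquet releases):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.parquet.hadoop.ParquetFileReader
    import scala.collection.JavaConverters._

    // Read only the footer: the file schema plus row-group metadata.
    val footer = ParquetFileReader.readFooter(
      new Configuration(), new Path("/tmp/person/part-00000.parquet"))

    // The message schema, comparable to `parquet-tools schema`.
    println(footer.getFileMetaData.getSchema)

    // Per-row-group sizes, comparable to part of `parquet-tools meta`.
    footer.getBlocks.asScala.foreach { b =>
      println(s"row group: ${b.getRowCount} rows, ${b.getTotalByteSize} bytes")
    }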
  • Apache Parquet 1 : Introduction

    Ramathan moubarek :)

    published: 28 May 2017
    duration: 3:00
    views: 144
    https://wn.com/Apache_Parquet_1_Introduction
  • Parquet etl

    published: 11 Jun 2015
    duration: 14:28
    views: 5
    https://wn.com/Parquet_Etl
  • Efficient Data Storage for Analytics with Parquet 2.0

    published: 23 Jun 2014
    duration: 41:59
    views: 49854
    https://wn.com/0605_Efficient_Data_Storage_For_Analytics_With_Parquet_2_0
  • The Columnar Roadmap: Apache Parquet and Apache Arrow

    published: 20 Jun 2017
    duration: 42:41
    views: 678
    https://wn.com/The_Columnar_Roadmap_Apache_Parquet_And_Apache_Arrow
  • Even Faster: When Presto Meets Parquet @ Uber

    published: 20 Jun 2017
    duration: 36:23
    views: 120
    https://wn.com/Even_Faster_When_Presto_Meets_Parquet_Uber
  • Columnar Databases by the Example of Parquet (Колоночные БД на примере Parquet)

    http://0x1.tv/20170422CC Columnar databases by the example of Parquet (Leonid Blokhin, SECON-2017)
    * Leonid Blokhin
    * The differences between row-oriented and columnar databases.
    * Apache Parquet: application areas, the advantages it offers, and a comparison with other columnar databases.
    * Apache Spark: application areas, distinctive features, advantages and disadvantages, and working with Parquet files in the Hadoop File System.
    * RDDs, DataFrames, and Datasets in Apache Spark: why they are needed, how to use them, and the benefits they bring.
    * Mist: using Spark as a service with a REST API.

    published: 01 Jul 2017
    duration: 39:25
    views: 23
    https://wn.com/Колоночные_Бд_На_Примере_Parquet
  • Format Wars: from VHS and Beta to Avro and Parquet | DataEngConf SF '17

    Recorded at DataEngConf SF '17. You have your Hadoop cluster, and you are ready to fill it up with data, but wait: which format should you use to store your data? Should you store it in Plain Text, Sequence File, Avro, or Parquet? (And should you compress it?) HDFS or a block/object store? Which query engine? Picking your distribution and platform is just the first decision of many you need to make in order to create a successful data ecosystem. In addition to things like replication factor and node configuration, the choice of file format can have a profound impact on cluster performance. Each of the data formats has different strengths and weaknesses, depending on how you want to store and retrieve your data. For instance, we have observed performance differences on the order of 25x between Parquet and Plain Text files for certain workloads. However, it isn't the case that one is always better than the others. Adding to the selection is the question of which query engine works best for the given format and workload, and let's not forget: "Do I store that in HDFS or a block/object store?" This talk takes a closer look at some of these trade-offs. Attendees will learn, based on a few real-world use cases, the How, Why, and When of choosing one format over another (and whether your choice of query engine affects this). Covering the four major data formats (Plain Text, Sequence Files, Avro, and Parquet), we provide insight into what they are and how best to use and store them in HDFS or a block/object store. Speakers: Stephen O'Sullivan & Silvia Oliveros-Torres

    published: 27 Jun 2017
    duration: 41:58
    views: 175
    https://wn.com/Format_Wars_From_Vhs_And_Beta_To_Avro_And_Parquet_|_Dataengconf_Sf_'17
  • Apache Drill SQL Queries on Parquet Data | Whiteboard Walkthrough

    In this Whiteboard Walkthrough, Parth Chandra, chair of the PMC for the Apache Drill project and a member of the MapR engineering team, describes how the Apache Drill SQL query engine reads data in Parquet format, along with some of the best practices for getting maximum performance from Parquet. Additional Apache Drill resources: "Overview of Apache Drill's Query Execution Capabilities" Whiteboard Walkthrough video https://www.mapr.com/blog/big-data-sql-overview-apache-drill-query-execution-capabilities-whiteboard-walkthrough "SQL Query on Mixed Schema Data Using Apache Drill" blog post https://www.mapr.com/blog/sql-query-mixed-schema-data-using-apache-drill Free download of Apache Drill on the MapR sandbox https://www.mapr.com/products/mapr-sandbox-hadoop/download-sandbox-drill

    published: 12 Oct 2016
    duration: 10:09
    views: 1608
    https://wn.com/Apache_Drill_Sql_Queries_On_Parquet_Data_|_Whiteboard_Walkthrough
  • File Format Benchmark: Avro, JSON, ORC and Parquet

    published: 29 Jun 2016
    duration: 39:59
    views: 3910
    https://wn.com/File_Format_Benchmark_Avro_Json_Orc_And_Parquet