apache-spark - Spark predicate pushdown not working as expected

Tags: apache-spark apache-spark-sql partitioning parquet

I have a question about Spark's predicate pushdown behavior; something seems off.
I am using Spark 2.4.5 on macOS.

Below is my sample CSV data, results2.csv:

[screenshot of sample data omitted; per the ReadSchema in the plans below, the columns are date, home_team, away_team, home_score, away_score, tournament, city, neutral, country]

val df = spark.read.option("header", "true").csv("/Users/apple/kaggle-data/results2.csv")

Partitioned by two columns: country and city

df.repartition($"country",$"city").write.option("header", "true").partitionBy("country","city").parquet("/Users/apple/kaggle-data/part2/")

Partitioned by one column: country

val df2 = spark.read.option("header", "true").csv("/Users/apple/kaggle-data/results2.csv")
df2.repartition($"country").write.option("header", "true").partitionBy("country").parquet("/Users/apple/kaggle-data/part1/")
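Likewise, this one-column write should produce one subdirectory per country value (England illustrative):

/Users/apple/kaggle-data/part1/country=England/part-00000-....snappy.parquet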

I read the data partitioned on country only and query with predicates on both country and city, but the pushed filters show city, which is not what I expected; I was expecting country here.

val kaggleDf1 = spark.read.option("header", "true").parquet("/Users/apple/kaggle-data/part1/") 
kaggleDf1.where($"country" === "England" && $"city" === "London").explain(true)

Plan:
== Parsed Logical Plan ==
'Filter (('country = England) && ('city = London))
+- Relation[date#138,home_team#139,away_team#140,home_score#141,away_score#142,tournament#143,city#144,neutral#145,country#146] parquet

== Analyzed Logical Plan ==
date: string, home_team: string, away_team: string, home_score: string, away_score: string, tournament: string, city: string, neutral: string, country: string
Filter ((country#146 = England) && (city#144 = London))
+- Relation[date#138,home_team#139,away_team#140,home_score#141,away_score#142,tournament#143,city#144,neutral#145,country#146] parquet

== Optimized Logical Plan ==
Filter (((isnotnull(country#146) && isnotnull(city#144)) && (country#146 = England)) && (city#144 = London))
+- Relation[date#138,home_team#139,away_team#140,home_score#141,away_score#142,tournament#143,city#144,neutral#145,country#146] parquet

== Physical Plan ==
*(1) Project [date#138, home_team#139, away_team#140, home_score#141, away_score#142, tournament#143, city#144, neutral#145, country#146]
+- *(1) Filter (isnotnull(city#144) && (city#144 = London))
   +- *(1) FileScan parquet [date#138,home_team#139,away_team#140,home_score#141,away_score#142,tournament#143,city#144,neutral#145,country#146] Batched: true, Format: Parquet, Location: InMemoryFileIndex[/Users/apple/kaggle-data/part1], PartitionCount: 1, PartitionFilters: [isnotnull(country#146), (country#146 = England)], ***PushedFilters: [IsNotNull(city), EqualTo(city,London)]***, ReadSchema: struct<date:string,home_team:string,away_team:string,home_score:string,away_score:string,tourname...

I read the data partitioned on country only and query with a predicate on country alone, but the pushed filters are empty, which is not what I expected; I was expecting country here.

kaggleDf1.where($"country" === "England").explain(true)

Plan:
== Parsed Logical Plan ==
'Filter ('country = England)
+- Relation[date#138,home_team#139,away_team#140,home_score#141,away_score#142,tournament#143,city#144,neutral#145,country#146] parquet

== Analyzed Logical Plan ==
date: string, home_team: string, away_team: string, home_score: string, away_score: string, tournament: string, city: string, neutral: string, country: string
Filter (country#146 = England)
+- Relation[date#138,home_team#139,away_team#140,home_score#141,away_score#142,tournament#143,city#144,neutral#145,country#146] parquet

== Optimized Logical Plan ==
Filter (isnotnull(country#146) && (country#146 = England))
+- Relation[date#138,home_team#139,away_team#140,home_score#141,away_score#142,tournament#143,city#144,neutral#145,country#146] parquet

== Physical Plan ==
*(1) FileScan parquet [date#138,home_team#139,away_team#140,home_score#141,away_score#142,tournament#143,city#144,neutral#145,country#146] Batched: true, Format: Parquet, Location: InMemoryFileIndex[/Users/apple/kaggle-data/part1], PartitionCount: 1, PartitionFilters: [isnotnull(country#146), (country#146 = England)], ***PushedFilters: []***, ReadSchema: struct<date:string,home_team:string,away_team:string,home_score:string,away_score:string,tourname...

I read the data partitioned on country and city and query with predicates on both country and city, but the pushed filters are empty, which is not what I expected; I was expecting country and city here.

val kaggleDf2 = spark.read.option("header", "true").parquet("/Users/apple/kaggle-data/part2/")
kaggleDf2.where($"country" === "England" && $"city" === "London").explain(true)

Plan:
== Parsed Logical Plan ==
'Filter (('country = England) && ('city = London))
+- Relation[date#158,home_team#159,away_team#160,home_score#161,away_score#162,tournament#163,neutral#164,country#165,city#166] parquet

== Analyzed Logical Plan ==
date: string, home_team: string, away_team: string, home_score: string, away_score: string, tournament: string, neutral: string, country: string, city: string
Filter ((country#165 = England) && (city#166 = London))
+- Relation[date#158,home_team#159,away_team#160,home_score#161,away_score#162,tournament#163,neutral#164,country#165,city#166] parquet

== Optimized Logical Plan ==
Filter (((isnotnull(country#165) && isnotnull(city#166)) && (country#165 = England)) && (city#166 = London))
+- Relation[date#158,home_team#159,away_team#160,home_score#161,away_score#162,tournament#163,neutral#164,country#165,city#166] parquet

== Physical Plan ==
*(1) FileScan parquet [date#158,home_team#159,away_team#160,home_score#161,away_score#162,tournament#163,neutral#164,country#165,city#166] Batched: true, Format: Parquet, Location: InMemoryFileIndex[/Users/apple/kaggle-data/part2], PartitionCount: 1, PartitionFilters: [isnotnull(country#165), isnotnull(city#166), (country#165 = England), (city#166 = London)], ***PushedFilters: []***, ReadSchema: struct<date:string,home_team:string,away_team:string,home_score:string,away_score:string,tourname...

Can anyone help me with what is going wrong here? Am I missing something?

Best Answer

This is because of PartitionFilters, and the behavior is expected.

When data is saved as Parquet files using partitionBy and a query matches the partition filter criteria, Spark reads only the subdirectories that match those partition filters. It therefore does not need to apply the filters to the data again, so those columns never appear under PushedFilters at all; they show up under PartitionFilters instead.

Now, in your case:

kaggleDf1.where($"country" === "England" && $"city" === "London")
PartitionFilters: [isnotnull(country#146), (country#146 = England)]
PushedFilters: [IsNotNull(city), EqualTo(city,London)]

Spark reads only the files where country === "England" (because your data was partitioned by country when it was saved), so it does not need to apply that filter to the data again. You will not find this filter anywhere other than in PartitionFilters.
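For contrast, here is a minimal sketch (column names taken from the question's schema; the tournament value is illustrative): a filter on a partition column surfaces under PartitionFilters and prunes directories, whereas a filter on an ordinary data column surfaces under PushedFilters and is evaluated by the Parquet reader against row-group statistics.

// country is a partition column: expect it under PartitionFilters; PushedFilters stays empty
kaggleDf1.where($"country" === "England").explain()

// tournament is an ordinary data column: expect IsNotNull/EqualTo under PushedFilters
kaggleDf1.where($"tournament" === "Friendly").explain()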

Regarding apache-spark - Spark predicate pushdown not working as expected, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/60795530/
