apache-spark - Does Spark optimize identical but independent DAGs in pyspark?

Tags: apache-spark, pyspark

Consider the following pyspark code:

def transformed_data(spark):
    df = spark.read.json('data.json')
    df = expensive_transformation(df)  # (A)
    return df


def final_view(spark):
    df1 = transformed_data(spark)
    df = transformed_data(spark)

    df1 = foo_transform(df1)
    df = bar_transform(df)

    return df.join(df1)

My question is: is the operation marked (A) inside transformed_data optimized in final_view, so that it is performed only once?

Note that this code is not equivalent to:

df1 = transformed_data(spark)
df = df1

df1 = foo_transform(df1)
df = bar_transform(df)

df.join(df1)

(at least not from Python's point of view: in that case, id(df1) == id(df).)

The broader question is: what does Spark take into account when optimizing two equal DAGs: whether the DAGs (as defined by their nodes and edges) are equal, or whether their object identity is the same (i.e. df is df1)?
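To make that distinction concrete, here is a small self-contained sketch; spark.range stands in for the expensive read-and-transform, which is my substitution rather than part of the original question. The two dataframes are distinct Python objects, yet their lineage is built identically:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def build(spark):
    # stand-in for spark.read.json + expensive_transformation
    return spark.range(10).withColumn('doubled', F.col('id') * 2)

df1 = build(spark)
df = build(spark)

print(id(df1) == id(df))   # False: two distinct Python objects
df1.explain()              # yet both plans have the same shape,
df.explain()               # differing only in internal expression IDs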

Best Answer

Sort of. It relies on Spark having enough information to infer the dependency.

For example, I replicated your example as described:

from pyspark.sql.functions import hash

def f(spark, filename):
    # three chained hash projections stand in for an expensive transformation
    df = spark.read.csv(filename)
    df2 = df.select(hash('_c1').alias('hashc2'))
    df3 = df2.select(hash('hashc2').alias('hashc3'))
    df4 = df3.select(hash('hashc3').alias('hashc4'))
    return df4

filename = 'some-valid-file.csv'
df_a = f(spark, filename)
df_b = f(spark, filename)
assert df_a != df_b

df_joined = df_a.join(df_b, df_a.hashc4 == df_b.hashc4, how='left')

If I explain the resulting dataframe with df_joined.explain(extended=True), I see the following four plans:

== Parsed Logical Plan ==
Join LeftOuter, (hashc4#20 = hashc4#42)
:- Project [hash(hashc3#18, 42) AS hashc4#20]
:  +- Project [hash(hashc2#16, 42) AS hashc3#18]
:     +- Project [hash(_c1#11, 42) AS hashc2#16]
:        +- Relation[_c0#10,_c1#11,_c2#12] csv
+- Project [hash(hashc3#40, 42) AS hashc4#42]
   +- Project [hash(hashc2#38, 42) AS hashc3#40]
      +- Project [hash(_c1#33, 42) AS hashc2#38]
         +- Relation[_c0#32,_c1#33,_c2#34] csv
== Analyzed Logical Plan ==
hashc4: int, hashc4: int
Join LeftOuter, (hashc4#20 = hashc4#42)
:- Project [hash(hashc3#18, 42) AS hashc4#20]
:  +- Project [hash(hashc2#16, 42) AS hashc3#18]
:     +- Project [hash(_c1#11, 42) AS hashc2#16]
:        +- Relation[_c0#10,_c1#11,_c2#12] csv
+- Project [hash(hashc3#40, 42) AS hashc4#42]
   +- Project [hash(hashc2#38, 42) AS hashc3#40]
      +- Project [hash(_c1#33, 42) AS hashc2#38]
         +- Relation[_c0#32,_c1#33,_c2#34] csv
== Optimized Logical Plan ==
Join LeftOuter, (hashc4#20 = hashc4#42)
:- Project [hash(hash(hash(_c1#11, 42), 42), 42) AS hashc4#20]
:  +- Relation[_c0#10,_c1#11,_c2#12] csv
+- Project [hash(hash(hash(_c1#33, 42), 42), 42) AS hashc4#42]
   +- Relation[_c0#32,_c1#33,_c2#34] csv
== Physical Plan ==
SortMergeJoin [hashc4#20], [hashc4#42], LeftOuter
:- *(2) Sort [hashc4#20 ASC NULLS FIRST], false, 0
:  +- Exchange hashpartitioning(hashc4#20, 200)
:     +- *(1) Project [hash(hash(hash(_c1#11, 42), 42), 42) AS hashc4#20]
:        +- *(1) FileScan csv [_c1#11] Batched: false, Format: CSV, Location: InMemoryFileIndex[file: some-valid-file.csv], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<_c1:string>
+- *(4) Sort [hashc4#42 ASC NULLS FIRST], false, 0
   +- ReusedExchange [hashc4#42], Exchange hashpartitioning(hashc4#20, 200)

The physical plan above reads the CSV only once and reuses all of the computation, because Spark detected that the two FileScans are identical (i.e. Spark knows that they are not independent).
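If you want to confirm the reuse programmatically rather than by reading the plan text, one option is to look for a ReusedExchange node in the executed plan string. This sketch reaches through PySpark's _jdf handle into the JVM-side query execution, which is an internal, unstable API:

# Internal API: the Py4J bridge to the JVM-side QueryExecution object
executed_plan = df_joined._jdf.queryExecution().executedPlan().toString()
print('ReusedExchange' in executed_plan)  # True when the shuffle output is reused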

Now consider what happens if you replace read.csv with hand-crafted, independent but identical RDDs:

from pyspark.sql.functions import hash

def g(spark):
    # same chain of hash projections, but starting from a locally created dataframe
    df = spark.createDataFrame([('a', 'a'), ('b', 'b'), ('c', 'c')], ["_c1", "_c2"])
    df2 = df.select(hash('_c1').alias('hashc2'))
    df3 = df2.select(hash('hashc2').alias('hashc3'))
    df4 = df3.select(hash('hashc3').alias('hashc4'))
    return df4

df_c = g(spark)
df_d = g(spark)
df_joined = df_c.join(df_d, df_c.hashc4 == df_d.hashc4, how='left')

In this case, Spark's physical plan scans two different RDDs. Here is the output of running df_joined.explain(extended=True) to confirm:

== Parsed Logical Plan ==
Join LeftOuter, (hashc4#8 = hashc4#18)
:- Project [hash(hashc3#6, 42) AS hashc4#8]
:  +- Project [hash(hashc2#4, 42) AS hashc3#6]
:     +- Project [hash(_c1#0, 42) AS hashc2#4]
:        +- LogicalRDD [_c1#0, _c2#1], false
+- Project [hash(hashc3#16, 42) AS hashc4#18]
   +- Project [hash(hashc2#14, 42) AS hashc3#16]
      +- Project [hash(_c1#10, 42) AS hashc2#14]
         +- LogicalRDD [_c1#10, _c2#11], false

== Analyzed Logical Plan ==
hashc4: int, hashc4: int
Join LeftOuter, (hashc4#8 = hashc4#18)
:- Project [hash(hashc3#6, 42) AS hashc4#8]
:  +- Project [hash(hashc2#4, 42) AS hashc3#6]
:     +- Project [hash(_c1#0, 42) AS hashc2#4]
:        +- LogicalRDD [_c1#0, _c2#1], false
+- Project [hash(hashc3#16, 42) AS hashc4#18]
   +- Project [hash(hashc2#14, 42) AS hashc3#16]
      +- Project [hash(_c1#10, 42) AS hashc2#14]
         +- LogicalRDD [_c1#10, _c2#11], false

== Optimized Logical Plan ==
Join LeftOuter, (hashc4#8 = hashc4#18)
:- Project [hash(hash(hash(_c1#0, 42), 42), 42) AS hashc4#8]
:  +- LogicalRDD [_c1#0, _c2#1], false
+- Project [hash(hash(hash(_c1#10, 42), 42), 42) AS hashc4#18]
   +- LogicalRDD [_c1#10, _c2#11], false

== Physical Plan ==
SortMergeJoin [hashc4#8], [hashc4#18], LeftOuter
:- *(2) Sort [hashc4#8 ASC NULLS FIRST], false, 0
:  +- Exchange hashpartitioning(hashc4#8, 200)
:     +- *(1) Project [hash(hash(hash(_c1#0, 42), 42), 42) AS hashc4#8]
:        +- Scan ExistingRDD[_c1#0,_c2#1]
+- *(4) Sort [hashc4#18 ASC NULLS FIRST], false, 0
   +- Exchange hashpartitioning(hashc4#18, 200)
      +- *(3) Project [hash(hash(hash(_c1#10, 42), 42), 42) AS hashc4#18]
         +- Scan ExistingRDD[_c1#10,_c2#11]

This behaviour is not specific to PySpark.
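When Spark cannot infer that the two lineages are the same, as in the second example, the usual workaround is to build the dataframe once and persist it explicitly, so that every downstream branch reads the materialized result. A minimal sketch reusing the g defined above; the cache/count pattern is my suggestion, not part of the original answer:

from pyspark.sql.functions import col

df_shared = g(spark).cache()   # build the lineage once and mark it for caching
df_shared.count()              # optionally force materialization up front

# Self-join on the cached dataframe; aliases disambiguate the duplicated column name.
df_joined = df_shared.alias('l').join(
    df_shared.alias('r'),
    col('l.hashc4') == col('r.hashc4'),
    'left',
)
df_joined.explain(extended=True)  # both sides now scan the same InMemoryRelation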

Original question on Stack Overflow: https://stackoverflow.com/questions/55148432/
