I don't think I understand how select or drop works.
I am exploding a dataset and do not want some of the columns to be copied into the newly generated entries.
val ds = spark.sparkContext.parallelize(Seq(
("2017-01-01 06:15:00", "ASC_a", "1"),
("2017-01-01 06:19:00", "start", "2"),
("2017-01-01 06:22:00", "ASC_b", "2"),
("2017-01-01 06:30:00", "end", "2"),
("2017-01-01 10:45:00", "ASC_a", "3"),
("2017-01-01 10:50:00", "start", "3"),
("2017-01-01 11:22:00", "ASC_c", "4"),
("2017-01-01 11:31:00", "end", "5" )
)).toDF("timestamp", "status", "msg")
ds.show()
val foo = ds.select($"timestamp", $"msg")
val bar = ds.drop($"status")
foo.printSchema()
bar.printSchema()
println("foo " + foo.where($"status" === "end").count)
println("bar " + bar.where($"status" === "end").count)
Output:
root
 |-- timestamp: string (nullable = true)
 |-- msg: string (nullable = true)

root
 |-- timestamp: string (nullable = true)
 |-- msg: string (nullable = true)
foo 2
bar 2
Why do I still get an output of 2 for both, even though I
a) did not select status
b) dropped status
Edit:
println("foo " + foo.where(foo.col("status") === "end").count)
says there is no column status. Shouldn't this behave the same as println("foo " + foo.where($"status" === "end").count)?
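A minimal sketch of the difference (assuming a local SparkSession and the same `foo` as above): `foo.col("status")` is resolved eagerly against `foo`'s own schema, so it fails immediately, while `$"status"` is an unresolved attribute that is only resolved during analysis, where Spark can still find the column through the plan's lineage.

```scala
import org.apache.spark.sql.{SparkSession, AnalysisException}

val spark = SparkSession.builder.master("local[1]").appName("demo").getOrCreate()
import spark.implicits._

val ds = Seq(("2017-01-01 06:30:00", "end", "2")).toDF("timestamp", "status", "msg")
val foo = ds.select($"timestamp", $"msg")

// $"status" stays unresolved until analysis and is found in foo's lineage:
// the filter works even though status is not in foo's schema.
val lineageCount = foo.where($"status" === "end").count

// foo.col("status") is looked up in foo's own output right away and fails.
val failedEagerly =
  try { foo.where(foo.col("status") === "end"); false }
  catch { case _: AnalysisException => true }
```

This is why the two calls in the edit behave differently even though they look equivalent.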
Best answer
Why do I still get an output of 2 for both
Because the optimizer is free to reorganize the execution plan. In fact, if you check it:
== Physical Plan ==
*Project [_1#4 AS timestamp#8, _3#6 AS msg#10]
+- *Filter (isnotnull(_2#5) && (_2#5 = end))
+- *SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(input[0, scala.Tuple3, true])._1, true) AS _1#4, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(input[0, scala.Tuple3, true])._2, true) AS _2#5, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(input[0, scala.Tuple3, true])._3, true) AS _3#6]
+- Scan ExternalRDDScan[obj#3]
You will see that the filter is pushed down and executed before the projection. So the query is equivalent to:
SELECT _1 AS timestamp, _3 AS msg
FROM ds WHERE _2 IS NOT NULL AND _2 = 'end'
Arguably this is a minor bug, and the code should instead be translated as
SELECT * FROM (
    SELECT _1 AS timestamp, _3 AS msg FROM ds
) WHERE _2 IS NOT NULL AND _2 = 'end'
and throw an exception, since _2 is no longer in scope after the projection.
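If the dropped column really must be inaccessible downstream, one workaround (a sketch, assuming the same local SparkSession setup) is to cut the lineage by rebuilding the DataFrame from its RDD, which discards the logical plan the analyzer would otherwise resolve through:

```scala
import org.apache.spark.sql.{SparkSession, AnalysisException}

val spark = SparkSession.builder.master("local[1]").appName("demo").getOrCreate()
import spark.implicits._

val ds = Seq(
  ("2017-01-01 06:30:00", "end", "2"),
  ("2017-01-01 11:31:00", "end", "5")
).toDF("timestamp", "status", "msg")

val foo = ds.select($"timestamp", $"msg")

// Lineage intact: the filter on the dropped column still resolves.
val withLineage = foo.where($"status" === "end").count

// Round trip through the RDD API: the new DataFrame has only foo's schema,
// so the same filter now fails analysis.
val hardDropped = spark.createDataFrame(foo.rdd, foo.schema)
val failsNow =
  try { hardDropped.where($"status" === "end").count; false }
  catch { case _: AnalysisException => true }
```

Note that cutting the lineage this way also discards any optimizations the planner could have applied across the boundary, so it should be used deliberately.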
Regarding "scala - select/drop doesn't really remove the column?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/45917131/