java - How to replace groupBy with a more efficient method

Tags: java apache-spark hadoop mapreduce

My task is to analyze the Kennedy Space Center logs with Apache Spark. The code works, but I would like to get rid of the groupBy operation because of its cost.

The code below collects the requests that returned a 5xx error code and counts the failed requests.

My code:

import java.util.Objects;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;

SparkSession session = SparkSession.builder().master("local").appName(application_name).getOrCreate();
JavaSparkContext jsc = new JavaSparkContext(session.sparkContext());

// Parse each log line into a LogEntry bean, dropping lines that fail to parse
JavaRDD<LogEntry> input = jsc.textFile(hdfs_connect + args[0])
        .map(App::log_entry_extractor)
        .filter(Objects::nonNull);

Dataset<Row> dataSet = session.createDataFrame(input, LogEntry.class);

// task 1: count requests that returned a 5xx status code
dataSet.filter(col("returnCode").between(500, 599))
        .groupBy("request")
        .count()
        .select("request", "count")
//      .sort(desc("count"))
        .coalesce(1)
        .toJavaRDD()
        .saveAsTextFile(hdfs_connect + output_folder_task_1);

Sample data:

199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] "GET /history/apollo/ HTTP/1.0" 200 6245
unicomp6.unicomp.net - - [01/Jul/1995:00:00:06 -0400] "GET /shuttle/countdown/ HTTP/1.0" 200 3985
199.120.110.21 - - [01/Jul/1995:00:00:09 -0400] "GET /shuttle/missions/sts-73/mission-sts-73.html HTTP/1.0" 200 4085
burger.letters.com - - [01/Jul/1995:00:00:11 -0400] "GET /shuttle/countdown/liftoff.html HTTP/1.0" 304 0
199.120.110.21 - - [01/Jul/1995:00:00:11 -0400] "GET /shuttle/missions/sts-73/sts-73-patch-small.gif HTTP/1.0" 200 4179
burger.letters.com - - [01/Jul/1995:00:00:12 -0400] "GET /images/NASA-logosmall.gif HTTP/1.0" 304 0
burger.letters.com - - [01/Jul/1995:00:00:12 -0400] "GET /shuttle/countdown/video/livevideo.gif HTTP/1.0" 200 0
205.212.115.106 - - [01/Jul/1995:00:00:12 -0400] "GET /shuttle/countdown/countdown.html HTTP/1.0" 200 3985
d104.aa.net - - [01/Jul/1995:00:00:13 -0400] "GET /shuttle/countdown/ HTTP/1.0" 200 3985
129.94.144.152 - - [01/Jul/1995:00:00:13 -0400] "GET / HTTP/1.0" 200 7074

Best answer

There is nothing wrong with groupBy in this context (see DataFrame / Dataset groupBy behaviour/optimization), and there is no truly viable alternative. Unlike the RDD groupByKey, a DataFrame groupBy followed by count compiles to a hash aggregate with a partial (map-side) aggregation, so only one pre-aggregated row per request leaves each partition before the shuffle.
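As a minimal sketch (reusing the dataSet variable from the question), you can confirm this by printing the physical plan with explain():

// Minimal sketch, assuming the dataSet from the question above.
// explain() prints the physical plan; the two HashAggregate nodes
// (partial_count before the Exchange, count after it) show that Spark
// pre-aggregates on the map side before shuffling.
dataSet.filter(col("returnCode").between(500, 599))
        .groupBy("request")
        .count()
        .explain();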

On the other hand:

coalesce(1) is an anti-pattern in most cases and, in the worst case, can turn your process into a sequential one:

However, if you're doing a drastic coalesce, e.g. to numPartitions = 1, this may result in your computation taking place on fewer nodes than you like (e.g. one node in the case of numPartitions = 1). To avoid this, you can call repartition. This will add a shuffle step, but means the current upstream partitions will be executed in parallel (per whatever the current partitioning is).

Consider replacing it with repartition(1), or removing it entirely.
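As a minimal sketch (reusing the dataSet, hdfs_connect and output_folder_task_1 names from the question), the write stage would then look like this:

// Minimal sketch reusing the names from the question above.
// repartition(1) adds a shuffle step, so the filter and aggregation
// still run in parallel across all upstream partitions; only the
// final write is funneled into a single output file.
dataSet.filter(col("returnCode").between(500, 599))
        .groupBy("request")
        .count()
        .select("request", "count")
        .repartition(1)
        .toJavaRDD()
        .saveAsTextFile(hdfs_connect + output_folder_task_1);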

Regarding java - How to replace groupBy with a more efficient method, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/56575650/
