java - How should I debug an out-of-memory error in my Spark job?

Tags: java apache-spark hadoop yarn

I get the following error message when running a Spark job.

Container [pid=140679,containerID=some_container_id] is running beyond physical memory limits. Current usage: 2.3 GB of 2 GB physical memory used; 12.1 GB of 4.2 GB virtual memory used. Killing container.
Dump of the process-tree for some_container_id : 
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE 
|- 140679 140676 140679 140679 (bash) 0 0 118009856 333 /bin/bash -c //bin/java -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Xmx1638m -Dfile.encoding=UTF-8 -Djavax.security.auth.useSubjectCredsOnly=false -Djava.io.tmpdir=./tmp -Djava.io.tmpdir=/data9/nm-local-dir/usercache/appcache/some_application_id/some_container_id/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/data11/nm-log-dir/some_application_id/some_container_id -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 10.126.99.227 41953 attempt_1587100044137_3784279_m_000000_0 59373627899906 1>/data11/nm-log-dir/some_application_id/some_container_id/stdout 2>/data11/nm-log-dir/some_application_id/some_container_id/stderr 
|- 140824 140701 140679 140679 (1-script) 0 0 118013952 389 /bin/bash -ex ./some_code.sh params
|- 140861 140824 140679 140679 (java) 4082 485 10257293312 522833 /home1///search-env/package/jdk-1.7.0_80/bin/java -cp /data9/nm-local-dir/usercache//appcache/some_application_id/some_container_id/spark-2.0.2-bin-hadoop2.7//conf/:/data9/nm-local-dir/usercache//appcache/some_application_id/some_container_id/spark-2.0.2-bin-hadoop2.7/jars/*://search-cluster/name.180621/conf/hadoop/:/home1///search-env/package/hadoop-2.7.3.2.6.3.0-r5-centos7/etc/hadoop/://search-cluster/name.180621/env/hadoop-yarn/share/hadoop/common/lib/*://search-cluster/name.180621/env/hadoop-yarn/share/hadoop/common/*://search-cluster/name.180621/env/hadoop-yarn/share/hadoop/hdfs/://search-cluster/name.180621/env/hadoop-yarn/share/hadoop/hdfs/lib/*://search-cluster/name.180621/env/hadoop-yarn/share/hadoop/hdfs/*://search-cluster/name.180621/env/hadoop-yarn/share/hadoop/yarn/lib/*://search-cluster/name.180621/env/hadoop-yarn/share/hadoop/yarn/*:/home1///search-env/package/hadoop-2.7.3.2.6.3.0-r5-centos7/share/hadoop/mapreduce/lib/*:/home1///search-env/package/hadoop-2.7.3.2.6.3.0-r5-centos7/share/hadoop/mapreduce/*:/data9/nm-local-dir/usercache//appcache/some_application_id/some_container_id/:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:/data9/nm-local-dir/usercache//appcache/some_application_id/some_container_id/*:null://search-cluster/name.180621/env/hadoop-yarn/contrib/capacity-scheduler/*.jar:/*.jar -Xmx8g -XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --master yarn --deploy-mode client --conf spark.eventLog.eventLog.enabled=true --conf spark.executor.extraJavaOptions=-XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC --conf spark.executor.memoryOverhead=3g --conf spark.yarn.am.cores=1 --conf spark.driver.extraJavaOptions=-XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps --conf spark.executor.cores=1 --conf spark.dynamicAllocation.enabled=false --conf spark.executor.instances=200 --conf spark.executor.memory=5g --conf spark.yarn.am.memory=8g --conf spark.eventLog.dir=hdfs://name/user//spark2/logs --conf spark.yarn.historyServer.address=http://discover-name.linecorp.com:11005//shs2/quicklinks/history_server.url --conf spark.driver.memory=8g --conf spark.yarn.maxAppAttempts=1 --conf spark.shuffle.service.enabled=false --class org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver --name 1-script.sh --jars /data9/nm-local-dir/usercache//appcache/some_application_id/some_container_id/spark-2.0.2-bin-hadoop2.7//jars/datanucleus-api-jdo-3.2.6.jar,/data9/nm-local-dir/usercache//appcache/some_application_id/some_container_id/spark-2.0.2-bin-hadoop2.7//jars/datanucleus-core-3.2.10.jar,/data9/nm-local-dir/usercache//appcache/some_application_id/some_container_id/spark-2.0.2-bin-hadoop2.7//jars/datanucleus-rdbms-3.2.9.jar,hive-third-functions-2.1.2-shaded.jar --files hive-site.xml#hive-site.xml --queue batch spark-internal -f file
|- 140701 140679 140679 140679 (java) 658 46 2492874752 66786 /home1///search-env/package/jdk-1.7.0_80/bin/java -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Xmx1638m -Dfile.encoding=UTF-8 -Djavax.security.auth.useSubjectCredsOnly=false -Djava.io.tmpdir=./tmp -Djava.io.tmpdir=/data9/nm-local-dir/usercache//appcache/some_application_id/some_container_id/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/data11/nm-log-dir/some_application_id/some_container_id -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 10.126.99.227 41953 attempt_1587100044137_3784279_m_000000_0 59373627899906 Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143.

I know this is an out-of-memory error, but I don't know which setting needs more memory or how much more to give it.
Maybe I could find out by changing the configuration settings one at a time, but since the error is intermittent it is hard to tell which setting is the cause.
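For what it is worth, the only concrete data I have is the container dump above. Here is a sketch of how I pull the aggregated container logs from YARN (assuming log aggregation is enabled; the application id below is derived from the attempt id in the dump, so it may need to be adjusted):

# Pull aggregated logs for the whole application and look for killed containers
yarn logs -applicationId application_1587100044137_3784279 | grep -i -B 2 "killing container"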

These are the Spark memory configurations I have set:
--conf spark.executor.memory=3g \
--conf spark.executor.memoryOverhead=5g \
--conf spark.yarn.driver.memoryOverhead=3g \
--conf spark.yarn.am.memory=16g \
--conf spark.driver.memory=16g \

I think the job exceeded one or more of these limits. Is there any way to find out which memory limit the job exceeded? I would even like to know just whether it was one of the driver memory limits or one of the executor memory limits.
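If I understand correctly (this is only my assumption about how Spark on YARN sizes its container requests; the error message does not state it), each container request is roughly the heap setting plus its overhead, so from the settings above:

# Rough per-container requests derived from the settings above (my assumption, not confirmed)
echo "executor container ~ $((3 + 5))g"    # spark.executor.memory + spark.executor.memoryOverhead
echo "driver container  ~ $((16 + 3))g"    # spark.driver.memory + spark.yarn.driver.memoryOverhead

Neither 8 GB nor 19 GB matches the 2 GB limit in the error message, which is part of what confuses me.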

One more question: I cannot work out what the "2 GB" in the message "Current usage: 2.3 GB of 2 GB physical memory used" refers to. I did not specify "2 GB" anywhere in my configuration. Where does this value come from?

Best Answer

If I remember correctly, this is the part in excess of the available HDFS space. You can empty the trash (the .Trash directory in the root of your HDFS home directory). Normally it is cleaned up automatically after a set retention period.
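For example (a sketch; the exact path depends on your HDFS user, shown here as a placeholder):

# Force-remove expired trash checkpoints
hdfs dfs -expunge
# Or delete the .Trash directory outright, bypassing the trash itself
hdfs dfs -rm -r -skipTrash /user/<your_user>/.Trash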

Regarding "java - How should I debug an out-of-memory error in my Spark job?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62059182/
