hadoop - Error parsing the Spark driver host in Hadoop

Tags: hadoop apache-spark

I am trying to run Spark-1.0.1 against an Apache Hadoop 2.2.0 YARN cluster; both are deployed on my Windows 7 machine. When I try to run the JavaSparkPi example, I get a parse exception on the Hadoop side. On the Spark side all the arguments look fine, and there are no extra characters after the five digits of the port. Can anyone help...

Exception in thread "main" java.lang.NumberFormatException: For input string: "57831'"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:492)
    at java.lang.Integer.parseInt(Integer.java:527)
    at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
    at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
    at org.apache.spark.util.Utils$.parseHostPort(Utils.scala:544)
    at org.apache.spark.deploy.yarn.ExecutorLauncher.waitForSparkMaster(ExecutorLauncher.scala:163)
    at org.apache.spark.deploy.yarn.ExecutorLauncher.run(ExecutorLauncher.scala:101)
    at org.apache.spark.deploy.yarn.ExecutorLauncher$$anonfun$main$1.apply$mcV$sp(ExecutorLauncher.scala:263)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:53)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:52)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:52)
    at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ExecutorLauncher.scala:262)
    at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ExecutorLauncher.scala)
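
For illustration, here is a minimal sketch in Scala of the host:port split that fails in the trace above (hypothetical code mirroring the idea of Utils.parseHostPort from the trace, not Spark's actual source):

    // Minimal sketch mirroring the host:port parse from the stack trace.
    // Hypothetical code, not Spark's Utils.parseHostPort implementation.
    object ParseHostPortDemo {
      def parseHostPort(hostPort: String): (String, Int) = {
        val idx = hostPort.lastIndexOf(':')
        require(idx >= 0, s"No port found in '$hostPort'")
        // StringOps.toInt throws NumberFormatException on non-numeric input
        (hostPort.substring(0, idx), hostPort.substring(idx + 1).toInt)
      }

      def main(args: Array[String]): Unit = {
        println(parseHostPort("W01B62GR.UBSPROD.MSAD.UBS.NET:57831"))  // fine: port 57831
        println(parseHostPort("W01B62GR.UBSPROD.MSAD.UBS.NET:57831'")) // throws: "57831'" is not a number
      }
    }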


14/08/11 09:00:38 INFO yarn.Client: Command for starting the Spark ApplicationMaster:
List(%JAVA_HOME%/bin/java, -server, -Xmx512m, -Djava.io.tmpdir=%PWD%/tmp,
-Dspark.tachyonStore.folderName=\"spark-80c61976-f671-41b9-96a0-0c7c5c317fdb\",
-Dspark.yarn.secondary.jars=\"\",
-Dspark.driver.host=\"W01B62GR.UBSPROD.MSAD.UBS.NET\",
-Dspark.app.name=\"JavaSparkPi\",
-Dspark.jars=\"file:/N:/Nick/Spark/spark-1.0.1-bin-hadoop2/bin/../lib/spark-examples-1.0.1-hadoop2.2.0.jar\",
-Dspark.fileserver.uri=\"http://139.149.169.172:57836\",
-Dspark.executor.extraClassPath=\"N:\Nick\Spark\spark-1.0.1-bin-hadoop2\lib\spark-examples-1.0.1-hadoop2.2.0.jar\", 
-Dspark.master=\"yarn-client\", -Dspark.driver.port=\"57831\", 
-Dspark.driver.extraClassPath=\"N:\Nick\Spark\spark-1.0.1-bin-hadoop2\lib\spark-examples-1.0.1-hadoop2.2.0.jar\", 
-Dspark.httpBroadcast.uri=\"http://139.149.169.172:57835\", 
-Dlog4j.configuration=log4j-spark-container.properties,
org.apache.spark.deploy.yarn.ExecutorLauncher, --class, notused, --jar , null,  
--args  'W01B62GR.UBSPROD.MSAD.UBS.NET:57831' , 
--executor-memory, 1024, --executor-cores, 1, 
--num-executors , 2, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)

Best Answer

The error looks clear: 57831' is not a number; 57831 is. Look at your arguments: the trailing quote in 'W01B62GR.UBSPROD.MSAD.UBS.NET:57831' should not be there. If that is not literally what you passed, show your command line. I am also not sure this works on Windows without Cygwin.
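
If the stray quote is being introduced by shell quoting when YARN assembles the container launch command on Windows, one hedged workaround (an assumption on my part, not a confirmed Spark fix) would be to strip surrounding quote characters from the value before the host:port split; a sketch in Scala:

    // Hypothetical sanitizing step, assuming the trailing quote comes from
    // Windows shell quoting of the --args value; not part of Spark itself.
    def sanitizeHostPort(raw: String): String =
      raw.trim.stripPrefix("'").stripSuffix("'").stripPrefix("\"").stripSuffix("\"")

    // sanitizeHostPort("'W01B62GR.UBSPROD.MSAD.UBS.NET:57831'")
    //   returns "W01B62GR.UBSPROD.MSAD.UBS.NET:57831"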

Regarding "hadoop - Error parsing the Spark driver host in Hadoop", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/25238908/
