java.io.IOException: Cannot initialize Cluster in Hadoop 2 with YARN

Tags: java hadoop hadoop-yarn hadoop2

This is my first time posting on Stack Overflow, so I apologize if I'm doing something wrong.

I recently set up a new Hadoop cluster, and this is my first time trying to use Hadoop 2 with YARN. I currently get the following error when I submit a job:

java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
    at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
    at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)

Here are my configuration files:

mapred-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
            <name>dfs.name.dir</name>
            <value>/temp1/nn,/temp2/nn</value>
    </property>
    <property>
            <name>dfs.data.dir</name>
            <value>/temp1/dn,/temp2/dn</value>
    </property>
    <property>
            <name>fs.checkpoint.dir</name>
            <value>/temp1/snn</value>
    </property>
    <property>
            <name>dfs.permissions.supergroup</name>
            <value>hrdbms</value>
    </property>
    <property>
            <name>dfs.block.size</name>
            <value>268435456</value>
    </property>
</configuration>

yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>172.31.20.99</value>
    </property>
    <property>
            <name>yarn.nodemanager.local-dirs</name>
            <value>/temp1/y1,/temp2/y1</value>
    </property>
    <property>
            <name>yarn.nodemanager.log-dirs</name>
            <value>/temp1/y2,/temp2/y2</value>
    </property>
    <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
    </property>
</configuration>

And here is my Java code:

            Configuration conf = new Configuration();
            conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
            conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
            conf.setBoolean("mapred.compress.map.output",true);
            conf.addResource(new org.apache.hadoop.fs.Path("/usr/local/hadoop-2.5.1/etc/hadoop/core-site.xml"));
            conf.addResource(new org.apache.hadoop.fs.Path("/usr/local/hadoop-2.5.1/etc/hadoop/hdfs-site.xml"));
            conf.addResource(new org.apache.hadoop.fs.Path("/usr/local/hadoop-2.5.1/etc/hadoop/yarn-site.xml"));
            conf.set("mapreduce.framework.name", "yarn");
            conf.setClass("mapred.map.output.compression.codec", org.apache.hadoop.io.compress.SnappyCodec.class, CompressionCodec.class);
            Job job = new Job(conf);
            job.setJarByClass(LoadMapper.class);
            job.setJobName("Load " + schema + "." + table);
            job.setMapperClass(LoadMapper.class);
            job.setReducerClass(LoadReducer.class);
            job.setOutputKeyClass(IntWritable.class);
            job.setOutputValueClass(ALOWritable.class);
            job.setMapOutputKeyClass(IntWritable.class);
            job.setMapOutputValueClass(ALOWritable.class);
            job.setNumReduceTasks(workerNodes.size());
            job.setOutputFormatClass(LoadOutputFormat.class);
            job.setReduceSpeculativeExecution(false);
            job.setMapSpeculativeExecution(false);
            String glob2 = glob.substring(6);
            FileInputFormat.addInputPath(job, new org.apache.hadoop.fs.Path(glob2));
            HRDBMSWorker.logger.debug("Submitting MR job");
            boolean allOK = job.waitForCompletion(true);
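
(As an aside, the Job constructors used above are deprecated in Hadoop 2. This has nothing to do with the error, but a minimal sketch of the factory-method form would be:)

    // Equivalent to "new Job(conf)" but not deprecated in Hadoop 2.
    Job job = Job.getInstance(conf);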

Here are all the environment variables that are set when I start the JVM:

HADOOP_DATANODE_OPTS=-Dhadoop.security.logger=ERROR,RFAS 
HOSTNAME=ip-172-31-20-103
HADOOP_IDENT_STRING=hrdbms
SHELL=/bin/bash
TERM=xterm
HADOOP_HOME=/usr/local/hadoop-2.5.1
HISTSIZE=1000
HADOOP_PID_DIR=
YARN_HOME=/usr/local/hadoop-2.5.1
USER=hrdbms
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
HADOOP_SECURE_DN_PID_DIR=
HADOOP_SECURE_DN_LOG_DIR=/
MAIL=/var/spool/mail/hrdbms
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hrdbms/bin
HADOOP_HDFS_HOME=/usr/local/hadoop-2.5.1
HADOOP_CLIENT_OPTS=-Xmx512m 
HADOOP_COMMON_HOME=/usr/local/hadoop-2.5.1
PWD=/home/hrdbms
JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/jre
HADOOP_CLASSPATH=/home/hrdbms/HRDBMS.jar:/contrib/capacity-scheduler/*.jar
HADOOP_CONF_DIR=/etc/hadoop
LANG=en_US.UTF-8
HADOOP_PORTMAP_OPTS=-Xmx512m 
HADOOP_OPTS= -Djava.net.preferIPv4Stack=true
HADOOP_SECONDARYNAMENODE_OPTS=-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender 
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/hrdbms
YARN_CONF_DIR=/etc/hadoop
HADOOP_SECURE_DN_USER=
HADOOP_NAMENODE_OPTS=-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender 
HADOOP_MAPRED_HOME=/usr/local/hadoop-2.5.1
LOGNAME=hrdbms
HADOOP_NFS3_OPTS=
LESSOPEN=|/usr/bin/lesspipe.sh %s
HADOOP_YARN_USER=hrdbms
G_BROKEN_FILENAMES=1
_=/bin/env

And here is a list of all the jars on the client classpath:

activation-1.1.jar
antlr-4.2.1-complete.jar
aopalliance-1.0.jar
apacheds-i18n-2.0.0-M15.jar
apacheds-kerberos-codec-2.0.0-M15.jar
api-asn1-api-1.0.0-M20.jar
api-util-1.0.0-M20.jar
asm-3.2.jar
avro-1.7.4.jar
commons-beanutils-1.7.0.jar
commons-beanutils-core-1.8.0.jar
commons-cli-1.2.jar
commons-codec-1.3.jar
commons-codec-1.4.jar
commons-collections-3.2.1.jar
commons-compress-1.4.1.jar
commons-configuration-1.6.jar
commons-daemon-1.0.13.jar
commons-digester-1.8.jar
commons-el-1.0.jar
commons-httpclient-3.1.jar
commons-io-2.4.jar
commons-lang-2.6.jar
commons-logging-1.1.3.jar
commons-math3-3.1.1.jar
commons-net-3.1.jar
guava-11.0.2.jar
guice-3.0.jar
guice-servlet-3.0.jar
hadoop-annotations-2.5.1.jar
hadoop-archives-2.5.1.jar
hadoop-auth-2.5.1.jar
hadoop-common-2.5.1-tests.jar
hadoop-common-2.5.1.jar
hadoop-datajoin-2.5.1.jar
hadoop-distcp-2.5.1.jar
hadoop-extras-2.5.1.jar
hadoop-gridmix-2.5.1.jar
hadoop-hdfs-2.5.1-tests.jar
hadoop-hdfs-2.5.1.jar
hadoop-hdfs-nfs-2.5.1.jar
hadoop-mapreduce-client-app-2.5.1.jar
hadoop-mapreduce-client-common-2.5.1.jar
hadoop-mapreduce-client-core-2.5.1.jar
hadoop-mapreduce-client-hs-2.5.1.jar
hadoop-mapreduce-client-hs-plugins-2.5.1.jar
hadoop-mapreduce-client-jobclient-2.5.1-tests.jar
hadoop-mapreduce-client-jobclient-2.5.1.jar
hadoop-mapreduce-client-shuffle-2.5.1.jar
hadoop-mapreduce-examples-2.5.1.jar
hadoop-nfs-2.5.1.jar
hadoop-openstack-2.5.1.jar
hadoop-rumen-2.5.1.jar
hadoop-sls-2.5.1.jar
hadoop-streaming-2.5.1.jar
hadoop-yarn-api-2.5.1.jar
hadoop-yarn-applications-distributedshell-2.5.1.jar
hadoop-yarn-applications-unmanaged-am-launcher-2.5.1.jar
hadoop-yarn-client-2.5.1.jar
hadoop-yarn-common-2.5.1.jar
hadoop-yarn-server-applicationhistoryservice-2.5.1.jar
hadoop-yarn-server-common-2.5.1.jar
hadoop-yarn-server-nodemanager-2.5.1.jar
hadoop-yarn-server-resourcemanager-2.5.1.jar
hadoop-yarn-server-tests-2.5.1.jar
hadoop-yarn-server-web-proxy-2.5.1.jar
hamcrest-core-1.3.jar
httpclient-4.2.5.jar
httpcore-4.2.5.jar
jackson-core-asl-1.9.13.jar
jackson-jaxrs-1.9.13.jar
jackson-mapper-asl-1.9.13.jar
jackson-xc-1.9.13.jar
jasper-compiler-5.5.23.jar
jasper-runtime-5.5.23.jar
java-xmlbuilder-0.4.jar
javax.inject-1.jar
jaxb-api-2.2.2.jar
jaxb-impl-2.2.3-1.jar
jersey-client-1.9.jar
jersey-core-1.9.jar
jersey-guice-1.9.jar
jersey-json-1.9.jar
jersey-server-1.9.jar
jets3t-0.9.0.jar
jettison-1.1.jar
jetty-6.1.26.jar
jetty-util-6.1.26.jar
jline-0.9.94.jar
jsch-0.1.50.jar
jsp-api-2.1.jar
jsr305-1.3.9.jar
junit-4.11.jar
leveldbjni-all-1.8.jar
log4j-1.2.17.jar
metrics-core-3.0.0.jar
mockito-all-1.8.5.jar
netty-3.6.2.Final.jar
paranamer-2.3.jar
preflight-app-1.8.7.jar
protobuf-java-2.5.0.jar
servlet-api-2.5.jar
slf4j-api-1.7.5.jar
slf4j-log4j12-1.7.5.jar
snappy-java-1.0.4.1.jar
stax-api-1.0-2.jar
xmlenc-0.52.jar
zookeeper-3.4.6.jar

Any help is appreciated. Thanks!

Edit: I just found these debug log messages:

2014-10-27 19:31:21,789 DEBUG Cluster: Trying ClientProtocolProvider: org.apache.hadoop.mapred.LocalClientProtocolProvider

2014-10-27 19:31:21,789 DEBUG Cluster: Cannot pick org.apache.hadoop.mapred.LocalClientProtocolProvider as the ClientProtocolProvider - returned null protocol
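
For context, Cluster.initialize discovers ClientProtocolProvider implementations via java.util.ServiceLoader, so the output above means the YARN provider was never even tried. A minimal diagnostic sketch (the class name is mine, not from the original post) to print what ServiceLoader actually sees on the client classpath:

    import java.util.ServiceLoader;
    import org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider;

    public class ListProviders {
        public static void main(String[] args) {
            // Cluster.initialize iterates providers exactly like this; if
            // org.apache.hadoop.mapred.YarnClientProtocolProvider is not
            // printed, the YARN provider is invisible to this JVM.
            for (ClientProtocolProvider p : ServiceLoader.load(ClientProtocolProvider.class)) {
                System.out.println(p.getClass().getName());
            }
        }
    }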

Best Answer

I was confronted with a similar problem today. In my case I was building an über jar, and one of the dependencies (I have not found the culprit yet) was bringing in a META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider file with the contents:

org.apache.hadoop.mapred.LocalClientProtocolProvider

I provided my own version of that file in my project (i.e., put it on the classpath), containing:

org.apache.hadoop.mapred.YarnClientProtocolProvider

and the right one gets picked up. I suspect you are seeing something similar. To fix it, create the file described above and put it on your classpath. If I find the culprit jar, I will update this answer.
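
To check which copies of that service file are visible on a given classpath (useful when hunting for the culprit jar), a minimal sketch, with a class name of my own choosing:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.Enumeration;

    public class ServiceFileCheck {
        public static void main(String[] args) throws Exception {
            String res = "META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider";
            // List every copy of the service file on the classpath and dump its
            // contents; inside a single über jar only one copy can survive, and
            // that copy decides which providers ServiceLoader can construct.
            Enumeration<URL> urls = ServiceFileCheck.class.getClassLoader().getResources(res);
            while (urls.hasMoreElements()) {
                URL url = urls.nextElement();
                System.out.println(url);
                BufferedReader r = new BufferedReader(new InputStreamReader(url.openStream()));
                try {
                    String line;
                    while ((line = r.readLine()) != null) {
                        System.out.println("  " + line);
                    }
                } finally {
                    r.close();
                }
            }
        }
    }

If the über jar is built with the Maven Shade plugin, its ServicesResourceTransformer merges these service files across dependencies instead of letting one clobber the rest, which avoids this class of problem.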

This question and answer are from Stack Overflow: https://stackoverflow.com/questions/26567223/
