hadoop - Spark gives InvalidProtocolBufferException when querying Hadoop

Tags: hadoop, apache-spark

Running this example in the Spark (0.9.1) shell on a Hadoop 2 NameNode:

scala> val file1 = sc.textFile("hdfs://testhadoopname1.myserver.com:9000/user/ubuntu/events/datepart=2014-04-11/2014-04-11-09-42.txt")
14/04/16 10:27:01 INFO storage.MemoryStore: ensureFreeSpace(74968) called with curMem=302142, maxMem=311387750
14/04/16 10:27:01 INFO storage.MemoryStore: Block broadcast_3 stored as values to memory (estimated size 73.2 KB, free 296.6 MB)
file1: org.apache.spark.rdd.RDD[String] = MappedRDD[7] at textFile at <console>:12
scala> file1.count()

Why am I getting this?

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).; Host Details : local host is: "testhadoopname1.myserver.com/10.255.187.229"; destination host is: "testhadoopname1.myserver.com":9000;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
    at org.apache.hadoop.ipc.Client.call(Client.java:1351)
    at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:622)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
    at org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1701)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1647)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:222)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:140)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
    at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:898)
    at org.apache.spark.rdd.RDD.count(RDD.scala:726)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:15)
    at $iwC$$iwC$$iwC.<init>(<console>:20)
    at $iwC$$iwC.<init>(<console>:22)
    at $iwC.<init>(<console>:24)
    at <init>(<console>:26)
    at .<init>(<console>:30)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:622)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:772)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1040)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:609)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:640)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:604)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:793)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:838)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:750)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:598)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:605)
    at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:608)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:931)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:881)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:881)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:881)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:973)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
    at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
    at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:1398)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:1362)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto$1.parsePartialFrom(RpcHeaderProtos.java:1492)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto$1.parsePartialFrom(RpcHeaderProtos.java:1487)
    at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
    at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcHeaderProtos.java:2364)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:996)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
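
Reading the trace: the failure happens inside Hadoop's protobuf-based RPC client (org.apache.hadoop.ipc.Client) while it parses the NameNode's response header, before any Spark-specific code runs. You can take Spark out of the picture by calling the Hadoop FileSystem API directly; a minimal diagnostic sketch for the same shell, reusing the URI and path from the question:

import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

// Talk to the NameNode directly, bypassing Spark's RDD machinery.
// If the client and NameNode disagree on the RPC wire format, this
// fails with the same InvalidProtocolBufferException as file1.count().
val fs = FileSystem.get(new URI("hdfs://testhadoopname1.myserver.com:9000"), sc.hadoopConfiguration)
fs.getFileStatus(new Path("/user/ubuntu/events"))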

Accepted answer

The problem was that I was running an incompatible Hadoop version (the 2.0.3-alpha release). I recreated the HDFS cluster on the latest Hadoop 2.3.0, using the Spark 0.9.1 standalone cluster manager, and ran the example above in spark-shell... it worked as advertised.
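
If you hit the same error, a quick sanity check is to compare the Hadoop client version baked into your Spark assembly against the version the cluster runs: Hadoop's 2.0.x alpha releases and the 2.2+/2.3 GA releases are not RPC wire-compatible, which is exactly what the "invalid tag (zero)" protobuf error suggests. A sketch for spark-shell (the printed value depends on how your Spark build was assembled):

// Version of the Hadoop client libraries on the driver's classpath.
// Compare it with the output of `hadoop version` on the NameNode.
org.apache.hadoop.util.VersionInfo.getVersion

For Spark 0.9.x the bundled Hadoop version is chosen at build time (e.g. SPARK_HADOOP_VERSION=2.3.0 sbt/sbt assembly), so the client can be kept in step with the cluster.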

Original question on Stack Overflow: https://stackoverflow.com/questions/23116678/
