apache-kafka - Spark Streaming application fails with KafkaException: String exceeds the maximum size or with IllegalArgumentException

Tags: apache-kafka spark-streaming hadoop-yarn cloudera-cdh apache-spark-1.6

TL;DR:

My very simple Spark Streaming application fails in the driver with "KafkaException: String exceeds the maximum size". I see the same exception in the executors, but somewhere in the executor logs I also found an IllegalArgumentException with no further information in it.

Full problem:

I am using Spark Streaming to read some messages from a Kafka topic. This is what I'm doing:

import kafka.serializer.StringDecoder
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Milliseconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("testName")
val streamingContext = new StreamingContext(new SparkContext(conf), Milliseconds(millis))
val kafkaParams = Map(
      "metadata.broker.list" -> "somevalidaddresshere:9092",
      "auto.offset.reset" -> "largest"
    )
val topics = Set("data")
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      streamingContext,
      kafkaParams,
      topics
    ).map(_._2) // only need the values not the keys

All I do with the Kafka data is print it:

stream.print()

My application obviously has more code than this, but in order to track down my problem I stripped out everything I possibly could.

I am trying to run this code on YARN. This is my spark-submit line:

./spark-submit --class com.somecompany.stream.MainStream --master yarn --deploy-mode cluster myjar.jar hdfs://some.hdfs.address.here/user/spark/streamconfig.properties

The streamconfig.properties file is just a regular properties file and is probably irrelevant to the problem here.

After trying to execute the application, it fails very quickly with the following exception in the driver:

16/05/10 06:15:38 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, some.hdfs.address.here): kafka.common.KafkaException: String exceeds the maximum size of 32767.
    at kafka.api.ApiUtils$.shortStringLength(ApiUtils.scala:73)
    at kafka.api.TopicData$.headerSize(FetchResponse.scala:107)
    at kafka.api.TopicData.<init>(FetchResponse.scala:113)
    at kafka.api.TopicData$.readFrom(FetchResponse.scala:103)
    at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:170)
    at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:169)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.immutable.Range.foreach(Range.scala:141)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
    at kafka.api.FetchResponse$.readFrom(FetchResponse.scala:169)
    at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:135)
    at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.fetchBatch(KafkaRDD.scala:192)
    at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.getNext(KafkaRDD.scala:208)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
    at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

I don't even see my own code anywhere in that stack trace.

Examining the executors, I found the same exception as in the driver, but also, buried deep inside, the following exception:

16/05/10 06:40:47 ERROR executor.Executor: Exception in task 0.0 in stage 2.0 (TID 8)
java.lang.IllegalArgumentException
    at java.nio.Buffer.limit(Buffer.java:275)
    at kafka.api.FetchResponsePartitionData$.readFrom(FetchResponse.scala:38)
    at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:100)
    at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:98)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.immutable.Range.foreach(Range.scala:141)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at kafka.api.TopicData$.readFrom(FetchResponse.scala:98)
    at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:170)
    at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:169)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.immutable.Range.foreach(Range.scala:141)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
    at kafka.api.FetchResponse$.readFrom(FetchResponse.scala:169)
    at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:135)
    at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.fetchBatch(KafkaRDD.scala:192)
    at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.getNext(KafkaRDD.scala:208)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
    at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

I have no idea what the IllegalArgument is about, since no information is included.

The Spark version my YARN is using is 1.6.0. I also verified that my pom contains Spark 1.6.0 and not an earlier version. My scope is "provided".
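For illustration, the kind of declaration I mean looks roughly like this (a minimal sketch using the stock Apache coordinates; the Scala 2.10 suffix is an assumption, and the real pom has more in it):

<!-- Spark itself comes from the cluster, hence the "provided" scope -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.10</artifactId>
  <version>1.6.0</version>
  <scope>provided</scope>
</dependency>
<!-- The Kafka integration is not part of the Spark assembly, so it ships with the job jar -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka_2.10</artifactId>
  <version>1.6.0</version>
</dependency>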

I read data manually from the exact same topic, and the data there is just plain JSON. The data is not big at all, definitely smaller than 32767. Also, I am able to read this data with the regular command-line consumer, which is strange.

Sadly, googling this exception didn't yield anything useful.

Does anyone have an idea how to work out what exactly the problem is here?

Thanks in advance

Best answer

After a lot of digging, I think I found what the problem was. I'm running Spark on YARN (1.6.0-cdh5.7.0). Cloudera ships the newer Kafka client (the 0.9 version), which has a wire-protocol change compared to earlier versions. Our Kafka, however, is version 0.8.2.
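Assuming the brokers stay on 0.8.2, one way to align the versions would be to pin the Kafka client dependency explicitly in the pom instead of letting the CDH 0.9 client win dependency resolution. This is a sketch with the stock Apache coordinates, not a Cloudera-specific recipe:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka_2.10</artifactId>
  <version>1.6.0</version>
</dependency>
<!-- Force a 0.8.2.x client so its wire protocol matches the 0.8.2 brokers;
     the 0.9 client/0.8.2 broker mismatch is what appears to break the
     FetchResponse parsing seen in the stack traces above. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.2.1</version>
</dependency>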

The original question on Stack Overflow: https://stackoverflow.com/questions/37131580/
