Scala error Spark Streaming Kafka: ambiguous reference to overloaded definition

Tags: scala apache-spark spark-streaming

I am trying to create a Kafka direct stream in a Spark Streaming module, supplying the starting offsets externally, but it produces the compilation error below.

Here is the code that creates the Kafka direct stream:

val kafkaParams = Map("metadata.broker.list" -> "kafka.brokers")
// testing only
val fromOffsets: Map[TopicPartition, Long] = Map[TopicPartition, Long]()

val kafkaStream = KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, Array[Byte]]
    (ssc, kafkaParams, fromOffsets, (mmd: MessageAndMetadata[Array[Byte], Array[Byte]]) => mmd.message())

Here is the compilation error I get. Any ideas/pointers?

    ambiguous reference to overloaded definition,
    both method createDirectStream in object KafkaUtils of type (jssc: org.apache.spark.streaming.api.java.JavaStreamingContext, keyClass: Class[Array[Byte]], valueClass: Class[Array[Byte]], keyDecoderClass: Class[kafka.serializer.DefaultDecoder], valueDecoderClass: Class[kafka.serializer.DefaultDecoder], recordClass: Class[Array[Byte]], kafkaParams: java.util.Map[String,String], fromOffsets: java.util.Map[kafka.common.TopicAndPartition,Long], messageHandler: org.apache.spark.api.java.function.Function[kafka.message.MessageAndMetadata[Array[Byte],Array[Byte]],Array[Byte]])org.apache.spark.streaming.api.java.JavaInputDStream[Array[Byte]]
    and  method createDirectStream in object KafkaUtils of type (ssc: org.apache.spark.streaming.StreamingContext, kafkaParams: Map[String,String], fromOffsets: Map[kafka.common.TopicAndPartition,Long], messageHandler: kafka.message.MessageAndMetadata[Array[Byte],Array[Byte]] => Array[Byte])(implicit evidence$14: scala.reflect.ClassTag[Array[Byte]], implicit evidence$15: scala.reflect.ClassTag[Array[Byte]], implicit evidence$16: scala.reflect.ClassTag[kafka.serializer.DefaultDecoder], implicit evidence$17: scala.reflect.ClassTag[kafka.serializer.DefaultDecoder], implicit evidence$18: scala.reflect.ClassTag[Array[Byte]])org.apache.spark.streaming.dstream.InputDStream[Array[Byte]]
    match expected type ?
    [ERROR]     val kafkaStream = KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, Array[Byte]]

Best Answer

Use kafka.common.TopicAndPartition instead of org.apache.kafka.common.TopicPartition. Because fromOffsets is typed with the wrong TopicPartition class, neither overload's parameter types line up, which is likely why the compiler reports the two createDirectStream candidates as ambiguous rather than a plain type mismatch. A corrected sketch follows.
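For reference, here is a minimal corrected sketch of the same call against the Spark 1.x Kafka 0.8 direct-stream API shown in the error message; only the fromOffsets element type changes, and ssc is assumed to be the existing StreamingContext from the question:

    import kafka.common.TopicAndPartition
    import kafka.message.MessageAndMetadata
    import kafka.serializer.DefaultDecoder
    import org.apache.spark.streaming.kafka.KafkaUtils

    val kafkaParams = Map("metadata.broker.list" -> "kafka.brokers")

    // TopicAndPartition lives in kafka.common, not org.apache.kafka.common
    val fromOffsets: Map[TopicAndPartition, Long] = Map[TopicAndPartition, Long]()

    // Now matches the Scala overload: (ssc, kafkaParams, fromOffsets, messageHandler)
    val kafkaStream = KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, Array[Byte]](
      ssc, kafkaParams, fromOffsets,
      (mmd: MessageAndMetadata[Array[Byte], Array[Byte]]) => mmd.message())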

Regarding Scala error Spark Streaming Kafka: ambiguous reference to overloaded definition, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/37624110/
