apache-spark - java.lang.AbstractMethodError at org.apache.spark.internal.Logging$class.initializeLogIfNecessary

Tags: apache-spark apache-kafka spark-streaming cloudera-cdh

I am running Kafka producer and consumer code on CDH 5.12 for testing. When I run the consumer code, I get the following error:

dataSet: org.apache.spark.sql.Dataset[(String, String)] = [key: string, value: string]
query: org.apache.spark.sql.streaming.StreamingQuery = org.apache.spark.sql.execution.streaming.StreamingQueryWrapper@109a5573
2018-10-25 10:08:37 ERROR MicroBatchExecution:91 - Query [id = 70bc4f7a-cc41-470d-afd0-d46e5aebf3db, runId = 4d974468-6c6b-47e5-976b-8b9aa98114e2] terminated with error
java.lang.AbstractMethodError
        at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala:99)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.initializeLogIfNecessary(KafkaSourceProvider.scala:369)
        at org.apache.spark.internal.Logging$class.log(Logging.scala:46)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.log(KafkaSourceProvider.scala:369)
        at org.apache.spark.internal.Logging$class.logDebug(Logging.scala:58)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.logDebug(KafkaSourceProvider.scala:369)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$ConfigUpdater.set(KafkaSourceProvider.scala:439)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.kafkaParamsForDriver(KafkaSourceProvider.scala:394)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider.createSource(KafkaSourceProvider.scala:90)
        at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:277)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1$$anonfun$applyOrElse$1.apply(MicroBatchExecution.scala:80)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1$$anonfun$applyOrElse$1.apply(MicroBatchExecution.scala:77)
        at scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:194)
        at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:80)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:77)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:75)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:272)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:272)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:272)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:256)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:75)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:61)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:265)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Exception in thread "stream execution thread for [id = 70bc4f7a-cc41-470d-afd0-d46e5aebf3db, runId = 4d974468-6c6b-47e5-976b-8b9aa98114e2]" java.lang.AbstractMethodError
        ... (stack trace identical to the one above)
org.apache.spark.sql.streaming.StreamingQueryException: Query [id = 70bc4f7a-cc41-470d-afd0-d46e5aebf3db, runId = 4d974468-6c6b-47e5-976b-8b9aa98114e2] terminated with exception: null
  at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
  at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: java.lang.AbstractMethodError
  ... (stack trace identical to the one above)

Below is the Scala code I am running:

import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Read from a Kerberized, SSL-secured Kafka topic as a streaming DataFrame.
val dataFrame = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host:9093,host:9093,host:9093")
  .option("kafka.security.protocol", "SASL_SSL")
  .option("kafka.sasl.kerberos.service.name", "kafka")
  .option("kafka.ssl.truststore.location", "/opt/cloudera/security/jks/truststore.jks")
  .option("kafka.ssl.truststore.password", "password")
  .option("subscribe", "SampleTopic")
  .load()

// dataFrame.writeStream.format("console").option("truncate", "false").start().awaitTermination()

dataFrame.printSchema()

// Cast the raw Kafka key/value bytes to strings and echo each micro-batch to the console.
val dataSet = dataFrame.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)").as[(String, String)]
val query = dataSet.writeStream.outputMode("append").format("console").start()

query.awaitTermination()

Below is the command I ran to execute the code above:

spark2-shell --files /tmp/jaas.conf,/path/to/.keytab --conf spark.executor.extraJavaOptions=-Djava.security.auth.login.config=/tmp/jaas.conf --conf spark.driver.extraJavaOptions=-Djava.security.auth.login.config=/tmp/jaas.conf --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0 -i /path/to/file.scala
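For diagnosis, the Spark and Scala versions that the shell actually runs can be printed from inside spark2-shell; the --packages coordinate has to match both. A minimal check (spark is the SparkSession the shell provides):

// Run inside spark2-shell: print the versions the driver is actually using.
// The _2.10/_2.11 suffix of the --packages coordinate must match the Scala binary
// version, and the artifact version should match the Spark version.
println(s"Spark version: ${spark.version}")
println(s"Scala version: ${scala.util.Properties.versionString}")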

Thanks.

Best answer

I ran into a similar problem, and it turned out to be an incompatibility between the Spark version and the version of the package being used.

In your case: according to the Cloudera documentation, CDH 5.12 ships with Spark 1.6, which in turn requires Scala 2.10, whereas the package being used, org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0, is compiled against Scala 2.11. You could try org.apache.spark:spark-streaming-kafka_2.10:1.6.1 instead.
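More generally, both the Scala binary suffix (_2.10 vs. _2.11) and the Spark version in the dependency coordinate have to line up with what the cluster runs. A minimal build.sbt sketch of aligned coordinates (the versions here are illustrative assumptions; substitute whatever your cluster reports):

// build.sbt sketch: keep scalaVersion and the Spark artifact versions aligned with the cluster.
// The %% operator appends the Scala binary suffix (_2.11 here) automatically; "2.2.0" is an
// assumed Spark version, replace it with whatever spark.version reports on your cluster.
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"            % "2.2.0" % "provided",
  "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.2.0"
)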

Credit: https://community.hortonworks.com/articles/197922/spark-23-structured-streaming-integration-with-apa.html

Original question on Stack Overflow: https://stackoverflow.com/questions/52993141/
