scala - How to write a JDBC Sink for Spark Structured Streaming [SparkException: Task not serializable]?

Tags: scala apache-spark spark-structured-streaming

I need a JDBC sink for my Spark Structured Streaming DataFrame. Currently, as far as I know, the DataFrame API lacks a writeStream implementation for JDBC (neither in PySpark nor in Scala, as of Spark 2.2.0). The only suggestion I found was to write my own ForeachWriter Scala class, based on this article.

So I modified a simple word-count example from here by adding a custom ForeachWriter class and tried to writeStream to PostgreSQL. The word stream is generated manually from the console (using NetCat: nc -lk -p 9999) and read by Spark from a socket.

Unfortunately, I get a "Task not serializable" error.

APACHE_SPARK_VERSION=2.1.0
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112)

My Scala code:

//Spark context available as 'sc' (master = local[*], app id = local-1501242382770).
//Spark session available as 'spark'.

import java.sql._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder
  .master("local[*]")
  .appName("StructuredNetworkWordCountToJDBC")
  .config("spark.jars", "/tmp/data/postgresql-42.1.1.jar")
  .getOrCreate()

import spark.implicits._

val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

val words = lines.as[String].flatMap(_.split(" "))

val wordCounts = words.groupBy("value").count()

class JDBCSink(url: String, user:String, pwd:String) extends org.apache.spark.sql.ForeachWriter[org.apache.spark.sql.Row]{
    val driver = "org.postgresql.Driver"
    var connection:java.sql.Connection = _
    var statement:java.sql.Statement = _

    def open(partitionId: Long, version: Long):Boolean = {
        Class.forName(driver)
        connection = java.sql.DriverManager.getConnection(url, user, pwd)
        statement = connection.createStatement
        true
    }

    def process(value: org.apache.spark.sql.Row): Unit = {
        statement.executeUpdate("INSERT INTO public.test(col1, col2) " +
                                "VALUES ('" + value(0) + "'," + value(1) + ");")
    }

    def close(errorOrNull:Throwable):Unit = {
        connection.close
    }
}

val url="jdbc:postgresql://<mypostgreserver>:<port>/<mydb>"
val user="<user name>"
val pwd="<pass>"
val writer = new JDBCSink(url, user, pwd)

import org.apache.spark.sql.streaming.ProcessingTime

val query=wordCounts
  .writeStream
  .foreach(writer)
  .outputMode("complete")
  .trigger(ProcessingTime("25 seconds"))
  .start()

query.awaitTermination()

Error message:

ERROR StreamExecution: Query [id = ef2e7a4c-0d64-4cad-ad4f-91d349f8575b, runId = a86902e6-d168-49d1-b7e7-084ce503ea68] terminated with error
org.apache.spark.SparkException: Task not serializable
        at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:924)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:923)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:923)
        at org.apache.spark.sql.execution.streaming.ForeachSink.addBatch(ForeachSink.scala:49)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply$mcV$sp(StreamExecution.scala:503)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply(StreamExecution.scala:503)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply(StreamExecution.scala:503)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:262)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:46)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch(StreamExecution.scala:502)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$1.apply$mcV$sp(StreamExecution.scala:255)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$1.apply(StreamExecution.scala:244)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$1.apply(StreamExecution.scala:244)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:262)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:46)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1.apply$mcZ$sp(StreamExecution.scala:244)
        at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:43)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:239)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:177)
Caused by: java.io.NotSerializableException: org.apache.spark.sql.execution.streaming.StreamExecution
Serialization stack:
        - object not serializable (class: org.apache.spark.sql.execution.streaming.StreamExecution, value: Streaming Query [id = 9b01db99-9120-4047-b779-2e2e0b289f65, runId = e20beefa-146a-4139-96f9-de3d64ce048a] [state = TERMINATED])
        - field (class: $line21.$read$$iw$$iw, name: query, type: interface org.apache.spark.sql.streaming.StreamingQuery)
        - object (class $line21.$read$$iw$$iw, $line21.$read$$iw$$iw@24747e0f)
        - field (class: $line21.$read$$iw, name: $iw, type: class $line21.$read$$iw$$iw)
        - object (class $line21.$read$$iw, $line21.$read$$iw@1814ed19)
        - field (class: $line21.$read, name: $iw, type: class $line21.$read$$iw)
        - object (class $line21.$read, $line21.$read@13e62f5d)
        - field (class: $line25.$read$$iw, name: $line21$read, type: class $line21.$read)
        - object (class $line25.$read$$iw, $line25.$read$$iw@14240e5c)
        - field (class: $line25.$read$$iw$$iw, name: $outer, type: class $line25.$read$$iw)
        - object (class $line25.$read$$iw$$iw, $line25.$read$$iw$$iw@11e4c6f5)
        - field (class: $line25.$read$$iw$$iw$JDBCSink, name: $outer, type: class $line25.$read$$iw$$iw)
        - object (class $line25.$read$$iw$$iw$JDBCSink, $line25.$read$$iw$$iw$JDBCSink@6c096c84)
        - field (class: org.apache.spark.sql.execution.streaming.ForeachSink, name: org$apache$spark$sql$execution$streaming$ForeachSink$$writer, type: class org.apache.spark.sql.ForeachWriter)
        - object (class org.apache.spark.sql.execution.streaming.ForeachSink, org.apache.spark.sql.execution.streaming.ForeachSink@6feccb75)
        - field (class: org.apache.spark.sql.execution.streaming.ForeachSink$$anonfun$addBatch$1, name: $outer, type: class org.apache.spark.sql.execution.streaming.ForeachSink)
        - object (class org.apache.spark.sql.execution.streaming.ForeachSink$$anonfun$addBatch$1, <function1>)
        at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
        at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
        at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
        at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
        ... 25 more

How can I make it work?

Solution

(Thanks to everyone, and special thanks to @zsxwing for a simple solution):
  • Save the JDBCSink class to a file.
  • In spark-shell, load that class, e.g. with scala> :load <path_to_a_JDBCSink.scala_file>
  • Finally, scala> :paste the rest of the code, without the JDBCSink class definition.

Best answer

    Just define JDBCSink in a separate file rather than defining it as an inner class which may capture the outer reference.
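The underlying mechanics can be demonstrated without Spark at all: a class defined inside another scope (like the spark-shell's $iw wrappers visible in the serialization stack above) compiles with a hidden $outer field, so Java serialization tries to drag the whole enclosing object along. A minimal sketch, where Outer, InnerSink and TopLevelSink are illustrative names (not Spark APIs):

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// Stand-in for a non-serializable enclosing scope, like the REPL's $iw wrapper
class Outer {
  // Every InnerSink instance secretly holds an $outer reference to its Outer
  class InnerSink extends Serializable
}

// Defined at the top level: no hidden outer reference
class TopLevelSink extends Serializable

object SerializationDemo {
  // Returns true if Java serialization accepts the object
  def canSerialize(obj: AnyRef): Boolean =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
      true
    } catch {
      case _: NotSerializableException => false
    }

  def main(args: Array[String]): Unit = {
    val outer = new Outer
    // Serializing the inner class drags in the non-serializable Outer
    println(canSerialize(new outer.InnerSink)) // false
    println(canSerialize(new TopLevelSink))    // true
  }
}
```

This is exactly what the serialization stack above shows: the REPL-defined JDBCSink's $outer field points at the $iw wrapper, which in turn reaches the non-serializable StreamExecution. Moving the class to its own file removes the hidden field.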

    This question was originally asked on Stack Overflow: https://stackoverflow.com/questions/45373795/
