mysql - Spark Structured Streaming: primary key in JDBC sink

Tags: mysql apache-spark apache-spark-sql spark-structured-streaming apache-spark-dataset

I am using Structured Streaming in update output mode to read a stream from a Kafka topic and then apply some transformations.

I then created a JDBC sink to push the data into MySQL in Append mode. The question is: how do I tell the sink which column is my primary key, and have it update rows based on that key, so that my table never ends up with duplicate rows?

    val df: DataFrame = spark
      .readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "<List-here>")
      .option("subscribe", "emp-topic")
      .load()

    import spark.implicits._

    // value in kafka is bytes so cast it to String
    val empList: Dataset[Employee] = df
      .selectExpr("CAST(value AS STRING)")
      .map(row => Employee(row.getString(0)))

    // window aggregations on 1 min windows
    val aggregatedDf = ......

    // How to tell here that id is my primary key and do the update
    // based on id column
    aggregatedDf
      .writeStream
      .trigger(Trigger.ProcessingTime(60.seconds))
      .outputMode(OutputMode.Update)
      .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
        batchDF
          .select("id", "name", "salary", "dept")
          .write.format("jdbc")
          .option("url", "jdbc:mysql://localhost/empDb")
          .option("driver", "com.mysql.cj.jdbc.Driver")
          .option("dbtable", "empDf")
          .option("user", "root")
          .option("password", "root")
          .mode(SaveMode.Append)
          .save()
      }
      .start()

Best Answer

One way to achieve this is to use MySQL's ON DUPLICATE KEY UPDATE together with foreachPartition.

Below is a pseudo-code snippet:

    import java.sql.{Connection, DriverManager}
    import org.apache.spark.sql.{DataFrame, Row}

    /**
      * Insert into the database using foreachPartition.
      * @param dataFrame                   DataFrame to persist
      * @param sqlDatabaseConnectionString JDBC connection string
      * @param sqlTableName                target table name
      */
    def insertToTable(dataFrame: DataFrame,
                      sqlDatabaseConnectionString: String,
                      sqlTableName: String): Unit = {

      // numPartitions = number of simultaneous DB connections you plan to allow
      val numPartitions = 8
      val repartitionedDf = dataFrame.repartition(numPartitions)

      val tableHeader: String = repartitionedDf.columns.mkString(",")
      // On a key collision, overwrite every non-key column with the incoming value
      // ("id" is the primary key column from the question)
      val updateClause: String = repartitionedDf.columns
        .filterNot(_ == "id")
        .map(c => s"$c=VALUES($c)")
        .mkString(", ")

      repartitionedDf.foreachPartition { (partition: Iterator[Row]) =>
        // Note: one connection per partition (a connection pool would be even better)
        val sqlExecutorConnection: Connection =
          DriverManager.getConnection(sqlDatabaseConnectionString)
        try {
          // Batch size of 1000: some databases (e.g. Azure SQL) cap a batch at 1000 rows
          partition.grouped(1000).foreach { group =>
            // Naive string quoting of every value; see the PreparedStatement note below
            val insertString: String = group
              .map(record => record.mkString("('", "','", "')"))
              .mkString(",")

            val sql =
              s"""
                 |INSERT INTO $sqlTableName ($tableHeader)
                 |VALUES $insertString
                 |ON DUPLICATE KEY UPDATE $updateClause
               """.stripMargin

            sqlExecutorConnection.createStatement().executeUpdate(sql)
          }
        } finally {
          sqlExecutorConnection.close() // close the connection
        }
      }
    }
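Wired back into the question's foreachBatch, the call would look roughly like this (a sketch; the URL, credentials, table, and columns are the ones from the question, and MySQL's JDBC driver accepts user/password as URL parameters):

    aggregatedDf
      .writeStream
      .trigger(Trigger.ProcessingTime(60.seconds))
      .outputMode(OutputMode.Update)
      .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
        insertToTable(
          batchDF.select("id", "name", "salary", "dept"),
          "jdbc:mysql://localhost/empDb?user=root&password=root",
          "empDf")
      }
      .start()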

You can use a PreparedStatement instead of a plain JDBC Statement.
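For example, here is a minimal sketch of the PreparedStatement variant, assuming the table and columns from the question (empDf with id, name, salary, dept) and string/double column types; parameter binding sidesteps both the manual quoting above and SQL injection:

    import java.sql.DriverManager
    import org.apache.spark.sql.Row

    val upsertSql =
      """INSERT INTO empDf (id, name, salary, dept)
        |VALUES (?, ?, ?, ?)
        |ON DUPLICATE KEY UPDATE
        |  name=VALUES(name), salary=VALUES(salary), dept=VALUES(dept)
      """.stripMargin

    batchDF.select("id", "name", "salary", "dept").foreachPartition { (partition: Iterator[Row]) =>
      val conn = DriverManager.getConnection(
        "jdbc:mysql://localhost/empDb?user=root&password=root")
      val stmt = conn.prepareStatement(upsertSql)
      try {
        partition.foreach { row =>
          // adjust the getters to your actual column types
          stmt.setString(1, row.getString(0)) // id
          stmt.setString(2, row.getString(1)) // name
          stmt.setDouble(3, row.getDouble(2)) // salary
          stmt.setString(4, row.getString(3)) // dept
          stmt.addBatch()                     // queue the row
        }
        stmt.executeBatch()                   // flush the partition in one round trip
      } finally {
        stmt.close()
        conn.close()
      }
    }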

Further reading: SPARK SQL - update MySql table using DataFrames and JDBC

Regarding mysql - Spark Structured Streaming: primary key in JDBC sink, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55954996/
