apache-spark - How do I write null values from a Spark SQL expression on a DataFrame to a database table? (IllegalArgumentException: Can't get JDBC type for null)

Tags: apache-spark apache-spark-sql

When I try to run the following code, I get the error java.lang.IllegalArgumentException: Can't get JDBC type for null. Example:

    ...
    import org.apache.spark.sql.{SaveMode, SparkSession}

    val spark = SparkSession.builder
      .master("local[*]")
      .appName("Demo")
      .getOrCreate()

    import spark.implicits._

    // load the first table
    val df_one = spark.read
      .format("jdbc")
      .option("url", myDbUrl)
      .option("dbtable", myTableOne)
      .option("user", myUser)
      .option("password", myPassw)
      .load()

    df_one.createGlobalTempView("table_one")

    // load the second table
    val df_two = spark.read
      .format("jdbc")
      .option("url", myUrl)
      .option("dbtable", myTableTwo)
      .option("user", myUser)
      .option("password", myPassw)
      .load()

    df_two.createGlobalTempView("table_two")

    // perform the join of the two tables
    val df_result = spark.sql(
      "select o.field_one, t.field_two, null as field_three " +
      "from global_temp.table_one o, global_temp.table_two t where o.key_one = t.key_two"
    )

    // The error occurs here:
    df_result.write
      .format("jdbc")
      .option("dbtable", myResultTable)
      .option("url", myDbUrl)
      .option("user", myUser)
      .option("password", myPassw)
      .mode(SaveMode.Append)
      .save()
    ...

I get the error:

Exception in thread "main" java.lang.IllegalArgumentException: Can't get JDBC type for null
                at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getJdbcType$2.apply(JdbcUtils.scala:148)
                at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getJdbcType$2.apply(JdbcUtils.scala:148)
                at scala.Option.getOrElse(Option.scala:121)
                at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getJdbcType(JdbcUtils.scala:147)
                at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$18.apply(JdbcUtils.scala:663)
                at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$18.apply(JdbcUtils.scala:662)
                at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
                at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
                at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
                at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
                at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
                at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
                at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:662)
                at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:77)
                at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:426)
                at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
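The root cause is that a bare null literal in Spark SQL is typed as NullType, and the JDBC writer has no mapping from NullType to a JDBC type. A quick way to confirm this is to inspect the schema of the joined DataFrame before writing (a minimal sketch; the integer types of field_one and field_two are an assumption based on the workaround below):

    df_result.printSchema()
    // root
    //  |-- field_one: integer (nullable = true)
    //  |-- field_two: integer (nullable = true)
    //  |-- field_three: null (nullable = true)   <-- NullType: no JDBC mapping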

Workaround (which slows the workflow down considerably):

    ...
    // needed for the casts below
    import org.apache.spark.sql.types.IntegerType

    // case class for the typed Dataset
    case class ResultCaseClass(field_one: Option[Int], field_two: Option[Int], field_three: Option[Int])

    // perform the join of the two tables
    val ds_result = spark.sql(
      "select o.field_one, t.field_two, null as field_three " +
      "from global_temp.table_one o, global_temp.table_two t where o.key_one = t.key_two"
    )
      .withColumn("field_one", $"field_one".cast(IntegerType))
      .withColumn("field_two", $"field_two".cast(IntegerType))
      .withColumn("field_three", $"field_three".cast(IntegerType))
      .as[ResultCaseClass]

    // Success:
    ds_result.write......
    ...

Best Answer

I ran into the same problem, and tracked the error message down in the Spark source code. You get "Can't get JDBC type for null" whenever a null value is inserted into the database without a specified data type. The fix is to cast the null to the data type that matches the target database column.

Example:

    lit(null).cast(StringType)
    // or, equivalently
    lit(null).cast("string")
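Applied to the question's query, the fix looks like this (a minimal sketch, assuming field_three is meant to be an integer column; for a text column, cast to string instead). The null can be typed either directly in the SQL or via lit(null).cast on the resulting DataFrame:

    import org.apache.spark.sql.functions.lit
    import org.apache.spark.sql.types.IntegerType

    // Option 1: type the null literal inside the SQL itself
    val df_result = spark.sql(
      "select o.field_one, t.field_two, cast(null as int) as field_three " +
      "from global_temp.table_one o, global_temp.table_two t where o.key_one = t.key_two"
    )

    // Option 2: replace the untyped null column after the query
    // val df_result = spark.sql("select ... null as field_three from ...")
    //   .withColumn("field_three", lit(null).cast(IntegerType))

    df_result.write
      .format("jdbc")
      .option("dbtable", myResultTable)
      .option("url", myDbUrl)
      .option("user", myUser)
      .option("password", myPassw)
      .mode(SaveMode.Append)
      .save()

Either way, field_three now carries IntegerType instead of NullType, so the JDBC writer can map it, and the case-class/Dataset detour from the workaround above is no longer needed.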

Regarding apache-spark - How do I write null values from a Spark SQL expression on a DataFrame to a database table? (IllegalArgumentException: Can't get JDBC type for null), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42925633/
