scala - Unsupported literal type class scala.runtime.BoxedUnit

Tags: scala apache-spark-sql datastax databricks

I am trying to filter a column of a DataFrame read from Oracle, as shown below:

import org.apache.spark.sql.functions.{col, lit, when}

val df0 = df_org.filter(col("fiscal_year").isNotNull())

When I do this, I get the following error:
java.lang.RuntimeException: Unsupported literal type class scala.runtime.BoxedUnit ()
at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:77)
at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
at scala.util.Try.getOrElse(Try.scala:79)
at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:162)
at org.apache.spark.sql.functions$.typedLit(functions.scala:113)
at org.apache.spark.sql.functions$.lit(functions.scala:96)
at org.apache.spark.sql.Column.apply(Column.scala:212)
at com.snp.processors.BenchmarkModelValsProcessor2.process(BenchmarkModelValsProcessor2.scala:80)
at com.snp.utils.Utils$$anonfun$getAllDefinedProcessors$1.apply(Utils.scala:30)
at com.snp.utils.Utils$$anonfun$getAllDefinedProcessors$1.apply(Utils.scala:30)
at com.sp.MigrationDriver$$anonfun$main$6$$anonfun$apply$2.apply(MigrationDriver.scala:140)
at com.sp.MigrationDriver$$anonfun$main$6$$anonfun$apply$2.apply(MigrationDriver.scala:140)
at scala.Option.map(Option.scala:146)
at com.sp.MigrationDriver$$anonfun$main$6.apply(MigrationDriver.scala:138)
at com.sp.MigrationDriver$$anonfun$main$6.apply(MigrationDriver.scala:135)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.MapLike$DefaultKeySet.foreach(MapLike.scala:174)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at com.sp.MigrationDriver$.main(MigrationDriver.scala:135)
at com.sp.MigrationDriver.main(MigrationDriver.scala)

Any idea what I am doing wrong here and how I can fix it?

Best Answer

Just remove the parentheses from the method call. Column.isNotNull is a parameterless method that returns a Column; writing isNotNull() first evaluates that method and then invokes Column.apply on the resulting Column with the Unit value (), which Spark tries to convert into a literal. That is what produces the "Unsupported literal type class scala.runtime.BoxedUnit" error seen in the stack trace (Column.apply, then functions.lit, then Literal.create).

From:

val df0 = df_org.filter(col("fiscal_year").isNotNull())

To:

val df0 = df_org.filter(col("fiscal_year").isNotNull)
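
A minimal, self-contained sketch of the corrected filter, assuming a local SparkSession and a small hypothetical DataFrame standing in for the Oracle-backed df_org:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object IsNotNullExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("isNotNull-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical stand-in for the Oracle-backed df_org
    val df_org = Seq(Some(2018), None, Some(2019)).toDF("fiscal_year")

    // isNotNull is a parameterless method on Column, so no trailing ()
    val df0 = df_org.filter(col("fiscal_year").isNotNull)
    df0.show()

    spark.stop()
  }
}

The same condition can also be expressed with a SQL-style string, e.g. df_org.filter("fiscal_year IS NOT NULL"); either form avoids the stray () that triggered the BoxedUnit literal.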

Regarding "scala - Unsupported literal type class scala.runtime.BoxedUnit", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/53374838/
