scala - spark sql round and bround

Tags: scala apache-spark apache-spark-sql

I'm confused about how round and bround work in Spark SQL.

scala> spark.sql("select round(1.5, 0), bround(1.5, 0)").show()
+-------------+--------------+
|round(1.5, 0)|bround(1.5, 0)|
+-------------+--------------+
|            2|             2|
+-------------+--------------+


scala> spark.sql("select round(2.5, 0), bround(2.5, 0)").show()
+-------------+--------------+
|round(2.5, 0)|bround(2.5, 0)|
+-------------+--------------+
|            3|             2|
+-------------+--------------+


scala> spark.sql("select round(3.5, 0), bround(3.5, 0)").show()
+-------------+--------------+
|round(3.5, 0)|bround(3.5, 0)|
+-------------+--------------+
|            4|             4|
+-------------+--------------+

Best answer

  1. round

Rounding mode to round towards {@literal "nearest neighbor"} unless both neighbors are equidistant, in which case round up. Behaves as for {@code RoundingMode.UP} if the discarded fraction is ≥ 0.5; otherwise, behaves as for {@code RoundingMode.DOWN}. Note that this is the rounding mode commonly taught at school.

Examples:

    input=5.5 output=6
    input=2.5 output=3
    input=1.6 output=2
    input=1.1 output=1
    input=1.0 output=1
    input=-1.0 output=-1
    input=-1.1 output=-1
    input=-1.6 output=-2
    input=-2.5 output=-3
    input=-5.5 output=-6
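
You can verify this table with plain BigDecimal, since HALF_UP is a standard java.math.RoundingMode. A minimal sketch (plain Scala, no Spark required; it only assumes the inputs listed above and is not meant to mirror Spark's internal implementation):

    // Check RoundingMode.HALF_UP against the example table above
    import java.math.{BigDecimal, RoundingMode}

    val inputs = Seq("5.5", "2.5", "1.6", "1.1", "1.0", "-1.0", "-1.1", "-1.6", "-2.5", "-5.5")
    inputs.foreach { s =>
      // setScale(0, HALF_UP) drops the fraction, rounding ties away from zero
      val out = new BigDecimal(s).setScale(0, RoundingMode.HALF_UP)
      println(s"input=$s output=$out")
    }

Running it prints 6, 3, 2, 1, 1, -1, -1, -2, -3, -6, matching the round() examples.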
  2. bround

Rounding mode to round towards the {@literal "nearest neighbor"} unless both neighbors are equidistant, in which case, round towards the even neighbor. Behaves as for {@code RoundingMode.HALF_UP} if the digit to the left of the discarded fraction is odd; behaves as for {@code RoundingMode.HALF_DOWN} if it's even. Note that this is the rounding mode that statistically minimizes cumulative error when applied repeatedly over a sequence of calculations. It is sometimes known as {@literal "Banker's rounding,"} and is chiefly used in the USA. This rounding mode is analogous to the rounding policy used for {@code float} and {@code double} arithmetic in Java.

Examples:

    input=5.5 output=6
    input=2.5 output=2
    input=1.6 output=2
    input=1.1 output=1
    input=1.0 output=1
    input=-1.0 output=-1
    input=-1.1 output=-1
    input=-1.6 output=-2
    input=-2.5 output=-2
    input=-5.5 output=-6
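
The same contrast shows up in the DataFrame API. A minimal sketch for spark-shell, assuming the pre-defined `spark` session as in the question; `functions.round` and `functions.bround` are the DataFrame counterparts of the SQL functions used above:

    import org.apache.spark.sql.functions.{bround, col, round}
    import spark.implicits._ // provides toDF in spark-shell

    val df = Seq(1.5, 2.5, 3.5, -2.5).toDF("x")
    df.select(
      col("x"),
      round(col("x"), 0).as("round_half_up"),    // ties away from zero: 2.5 -> 3.0, -2.5 -> -3.0
      bround(col("x"), 0).as("bround_half_even") // ties to even:        2.5 -> 2.0, -2.5 -> -2.0
    ).show()

In short: round() resolves ties by rounding away from zero (HALF_UP), while bround() resolves ties toward the even neighbor (HALF_EVEN, "banker's rounding"), which is why round(2.5) gives 3 but bround(2.5) gives 2.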

Regarding "scala - spark sql round and bround", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/62373334/
