java - How to update/delete data in spark-hive?

Tags: java scala hive apache-spark-sql spark-hive

I don't think my title explains the problem well, so here is the question in full:

Details of build.sbt:

name := "Hello"
scalaVersion := "2.11.8"
version      := "1.0"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.0"
libraryDependencies += "org.apache.spark" % "spark-hive_2.11" % "2.1.0"

Code:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.hive.HiveContext

val sparkSession = SparkSession.builder().enableHiveSupport().appName("HiveOnSpark").master("local").getOrCreate()
val hiveql: HiveContext = new HiveContext(sparkSession.sparkContext)

hiveql.sql("drop table if exists test")
hiveql.sql("create table test (id int, name string) stored as orc tblproperties(\"transactional\"=\"true\")")
hiveql.sql("insert into test values(1,'Yash')")
hiveql.sql("insert into test values(2,'Yash')")
hiveql.sql("insert into test values(3,'Yash')")
hiveql.sql("select * from test").show()
hiveql.sql("delete from test where id= 1")

The problem:

Exception in thread "main" org.apache.spark.sql.catalyst.parser.ParseException: 
Operation not allowed: delete from(line 1, pos 0)

== SQL ==
delete from test where id= 1
^^^

at org.apache.spark.sql.catalyst.parser.ParserUtils$.operationNotAllowed(ParserUtils.scala:39)
at org.apache.spark.sql.execution.SparkSqlAstBuilder$$anonfun$visitFailNativeCommand$1.apply(SparkSqlParser.scala:925)
at org.apache.spark.sql.execution.SparkSqlAstBuilder$$anonfun$visitFailNativeCommand$1.apply(SparkSqlParser.scala:916)
at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:93)
at org.apache.spark.sql.execution.SparkSqlAstBuilder.visitFailNativeCommand(SparkSqlParser.scala:916)
at org.apache.spark.sql.execution.SparkSqlAstBuilder.visitFailNativeCommand(SparkSqlParser.scala:52)
at org.apache.spark.sql.catalyst.parser.SqlBaseParser$FailNativeCommandContext.accept(SqlBaseParser.java:952)
at org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visit(AbstractParseTreeVisitor.java:42)
at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleStatement$1.apply(AstBuilder.scala:66)
at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleStatement$1.apply(AstBuilder.scala:66)
at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:93)
at org.apache.spark.sql.catalyst.parser.AstBuilder.visitSingleStatement(AstBuilder.scala:65)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:54)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:53)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:82)
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:45)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
at main.scala.InitMain$.delayedEndpoint$main$scala$InitMain$1(InitMain.scala:41)
at main.scala.InitMain$delayedInit$body.apply(InitMain.scala:9)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at main.scala.InitMain$.main(InitMain.scala:9)
at main.scala.InitMain.main(InitMain.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

An update query fails with the same error.

I have already gone through This, This, update query in Spark SQL, This, This, and many others.

I know Spark does not support update/delete, but I am in a situation where I need both operations. Can anyone suggest a workaround or help in some way?

Best answer

A not-very-performant workaround is to:

  1. Load the existing data (I recommend the DataFrame API)
  2. Create a new DataFrame containing the updated/remaining records
  3. Rewrite the DataFrame to disk

Choosing an appropriate partitioning scheme for the Hive table can minimize the amount of data that has to be rewritten.
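The three steps above can be sketched as follows. This is a sketch, not a drop-in fix: the table name `test` and the id to delete come from the question, while the staging table name `test_staging` and the helper `keepRow` are hypothetical choices made for illustration.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object DeleteWorkaroundSketch {
  // The "delete" is really a filter: keep every row whose id is not targeted.
  def keepRow(id: Int, idsToDelete: Set[Int]): Boolean = !idsToDelete.contains(id)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .enableHiveSupport()
      .appName("HiveDeleteWorkaround")
      .master("local")
      .getOrCreate()
    import spark.implicits._

    // 1. Load the existing data through the DataFrame API
    val existing = spark.table("test")

    // 2. Create a new DataFrame with the unwanted record filtered out
    //    (for an update, you would use when/otherwise in a select instead of filter)
    val remaining = existing.filter($"id" =!= 1)

    // 3. Rewrite to disk: write to a staging table first, since Spark cannot
    //    overwrite a table that is being read in the same plan, then swap names
    remaining.write.mode(SaveMode.Overwrite).saveAsTable("test_staging")
    spark.sql("drop table test")
    spark.sql("alter table test_staging rename to test")
  }
}
```

If the table is partitioned on a column correlated with the rows being changed, you can restrict both the read and the overwrite to the affected partitions, which is what keeps this workaround tolerable on larger tables.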

Regarding "java - How to update/delete data in spark-hive?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/43652072/
