apache-spark - How should Spark SQL be configured to access the Hive metastore?

Tags: apache-spark hive apache-spark-sql cloudera

This question already has answers here:

How to connect Spark SQL to remote Hive metastore (via thrift protocol) with no hive-site.xml? (9 answers)

Closed last year.

I am trying to read a table from the Hive metastore with Spark SQL, but Spark fails with a table-not-found error. I suspect Spark SQL is creating a brand-new, empty metastore of its own.

I submit the Spark job with this command:

spark-submit \
  --class etl.EIServerSpark \
  --driver-class-path '/opt/cloudera/parcels/CDH/lib/hive/lib/*' \
  --driver-java-options '-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH/lib/hive/lib/*' \
  --jars $HIVE_CLASSPATH \
  --files /etc/hive/conf/hive-site.xml,/etc/hadoop/conf/yarn-site.xml \
  --master yarn-client \
  /root/etl.jar

Here is the error:
2015-06-30 17:50:51,563 INFO  [main] util.Utils (Logging.scala:logInfo(59)) - Copying /etc/hive/conf/hive-site.xml to /tmp/spark-568de027-8b66-40fa-97a4-2ec50614f486/hive-site.xml
2015-06-30 17:50:51,568 INFO  [main] spark.SparkContext (Logging.scala:logInfo(59)) - Added file file:/etc/hive/conf/hive-site.xml at http://10.136.149.126:43349/files/hive-site.xml with timestamp 1435683051561
2015-06-30 17:50:51,568 INFO  [main] util.Utils (Logging.scala:logInfo(59)) - Copying /etc/hadoop/conf/yarn-site.xml to /tmp/spark-568de027-8b66-40fa-97a4-2ec50614f486/yarn-site.xml
2015-06-30 17:50:51,570 INFO  [main] spark.SparkContext (Logging.scala:logInfo(59)) - Added file file:/etc/hadoop/conf/yarn-site.xml at http://10.136.149.126:43349/files/yarn-site.xml with timestamp 1435683051568
2015-06-30 17:50:51,637 INFO  [sparkDriver-akka.actor.default-dispatcher-5] util.AkkaUtils (Logging.scala:logInfo(59)) - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@gateway.edp.hadoop:52818/user/HeartbeatReceiver
2015-06-30 17:50:51,756 INFO  [main] netty.NettyBlockTransferService (Logging.scala:logInfo(59)) - Server created on 40198
2015-06-30 17:50:51,757 INFO  [main] storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Trying to register BlockManager
2015-06-30 17:50:51,759 INFO  [sparkDriver-akka.actor.default-dispatcher-2] storage.BlockManagerMasterActor (Logging.scala:logInfo(59)) - Registering block manager localhost:40198 with 265.4 MB RAM, BlockManagerId(<driver>, localhost, 40198)
2015-06-30 17:50:51,761 INFO  [main] storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Registered BlockManager
2015-06-30 17:50:52,840 INFO  [main] parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: SELECT id, name FROM eiserver.eismpt
2015-06-30 17:50:53,141 INFO  [main] parse.ParseDriver (ParseDriver.java:parse(206)) - Parse Completed
2015-06-30 17:50:54,041 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(502)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2015-06-30 17:50:54,064 INFO  [main] metastore.ObjectStore (ObjectStore.java:initialize(247)) - ObjectStore, initialize called
2015-06-30 17:50:54,227 WARN  [main] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hive/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/jars/datanucleus-rdbms-3.2.9.jar."
2015-06-30 17:50:54,268 WARN  [main] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hive/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/jars/datanucleus-api-jdo-3.2.6.jar."
2015-06-30 17:50:54,274 WARN  [main] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hive/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/jars/datanucleus-core-3.2.10.jar."
2015-06-30 17:50:54,314 INFO  [main] DataNucleus.Persistence (Log4JLogger.java:info(77)) - Property datanucleus.cache.level2 unknown - will be ignored
2015-06-30 17:50:54,315 INFO  [main] DataNucleus.Persistence (Log4JLogger.java:info(77)) - Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
2015-06-30 17:50:56,109 INFO  [main] metastore.ObjectStore (ObjectStore.java:getPMF(318)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2015-06-30 17:50:56,170 INFO  [main] metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(110)) - MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
2015-06-30 17:50:57,315 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,316 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,688 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,688 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,842 INFO  [main] DataNucleus.Query (Log4JLogger.java:info(77)) - Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
2015-06-30 17:50:57,844 INFO  [main] metastore.ObjectStore (ObjectStore.java:setConf(230)) - Initialized ObjectStore
2015-06-30 17:50:58,113 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(560)) - Added admin role in metastore
2015-06-30 17:50:58,115 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(569)) - Added public role in metastore
2015-06-30 17:50:58,198 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers(597)) - No user is added in admin role, since config is empty
2015-06-30 17:50:58,376 INFO  [main] session.SessionState (SessionState.java:start(383)) - No Tez session required at this point. hive.execution.engine=mr.
2015-06-30 17:50:58,525 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(632)) - 0: get_table : db=eiserver tbl=eismpt
2015-06-30 17:50:58,525 INFO  [main] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(314)) - ugi=root     ip=unknown-ip-addr      cmd=get_table : db=eiserver tbl=eismpt
2015-06-30 17:50:58,567 ERROR [main] metadata.Hive (Hive.java:getTable(1003)) - NoSuchObjectException(message:eiserver.eismpt table not found)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1569)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

How do I configure Spark SQL to access a Hive metastore backed by Postgres? I am using CDH 5.3.2.

Thanks

Best Answer

Configure Spark to use the Hive metastore thriftserver:

Edit $SPARK_HOME/conf/hive-site.xml, remove the direct-connection (JDBC) information, and add this property:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- make sure to replace this with your hive-metastore service's thrift URL -->
    <value>thrift://localhost:9083</value>
    <description>URI for client to contact metastore server</description>
  </property>
</configuration>

If hive-site.xml is not in $SPARK_HOME/conf, then to connect to the Hive metastore you need to copy the hive-site.xml file into Spark's conf directory. Log in as root and run the following command:
cp  /usr/lib/hive/conf/hive-site.xml    /usr/lib/spark/conf/
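Alternatively, instead of copying the file into Spark's conf directory, it can be shipped with the job at submit time via --files, as the question's own command already does; a sketch (paths taken from the question, adjust to your cluster):

```
# Ship the client configs with the job instead of copying them into spark/conf
spark-submit \
  --class etl.EIServerSpark \
  --master yarn-client \
  --files /etc/hive/conf/hive-site.xml,/etc/hadoop/conf/yarn-site.xml \
  /root/etl.jar
```

Note that --files distributes the config to the executors; the driver still needs hive-site.xml on its classpath to find the metastore.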

Create a Hive context

At the scala> REPL prompt, type the following:
import org.apache.spark.sql.hive.HiveContext
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
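If editing hive-site.xml is impractical, the metastore URI can also be set on the context itself via setConf; a sketch, where `metastore-host` is a placeholder for the host running your Hive Metastore Server:

```scala
// Point the HiveContext at the shared metastore programmatically.
// "metastore-host" is a placeholder; on CDH this is the host running
// the Hive Metastore Server role.
hiveContext.setConf("hive.metastore.uris", "thrift://metastore-host:9083")
```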

Create a Hive table
hiveContext.sql("CREATE TABLE IF NOT EXISTS TestTable (key INT, value STRING)")

Show Hive tables (use sql; the older hql method is deprecated)
scala> hiveContext.sql("SHOW TABLES").collect().foreach(println)
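Once the existing tables show up, the query from the original error should resolve against the shared metastore (table name taken from the question):

```scala
// Should now hit the shared Hive metastore rather than a fresh local one
hiveContext.sql("SELECT id, name FROM eiserver.eismpt").show()
```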

Test the configuration (optional)
  • Stop the Spark SQL thriftserver with cd $SPARK_HOME; sbin/stop-thriftserver.sh
  • Start the Hive metastore thriftserver with cd; ./start-thriftserver.sh
  • Check the log at $HIVE_HOME/logs/metastore.out for any errors.
  • The Spark SQL thriftserver connects to this server on startup, so it must be running.
  • Start the Spark SQL thriftserver with cd $SPARK_HOME; sbin/start-thriftserver.sh and check the log file indicated in the returned line.
  • You should see lines like these:

  • 16/12/29 20:22:19 INFO metastore: Trying to connect to metastore with URI thrift://localhost:9083
    16/12/29 20:22:19 INFO metastore: Connected to metastore.


Run $SPARK_HOME/bin/beeline -u 'jdbc:hive2://localhost:10000/' and try the !tables command to make sure you are able to list the metadata.

Regarding "apache-spark - How should Spark SQL be configured to access the Hive metastore?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31144272/
