hadoop - On EMR Spark, JDBC load fails the first time, then works

Tags: hadoop apache-spark spark-dataframe emr elastic-map-reduce

I am using spark-shell with Spark 2.1.0 on AWS Elastic Map Reduce 5.3.1 to load data from a Postgres database. loader.load always fails the first time and then succeeds. Why is that?

[hadoop@[SNIP] ~]$ SPARK_PRINT_LAUNCH_COMMAND=1 spark-shell --driver-class-path ~/postgresql-42.0.0.jar 
Spark Command: /etc/alternatives/jre/bin/java -cp /home/hadoop/postgresql-42.0.0.jar:/usr/lib/spark/conf/:/usr/lib/spark/jars/*:/etc/hadoop/conf/ -Dscala.usejavacp=true -Xmx640M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p org.apache.spark.deploy.SparkSubmit --conf spark.driver.extraClassPath=/home/hadoop/postgresql-42.0.0.jar --class org.apache.spark.repl.Main --name Spark shell spark-shell
========================================
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/02/28 17:17:52 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/02/28 17:18:56 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://[SNIP]
Spark context available as 'sc' (master = yarn, app id = application_1487878172787_0014).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val loader = spark.read.format("jdbc") // connection options removed
loader: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@46067a74

scala> loader.load
java.sql.SQLException: No suitable driver
  at java.sql.DriverManager.getDriver(DriverManager.java:315)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
  ... 48 elided

scala> loader.load
res1: org.apache.spark.sql.DataFrame = [id: int, fsid: string ... 4 more fields]
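
A workaround worth noting here (not part of the original post): Spark's JDBC reader accepts a driver option naming the driver class, which makes Spark load and register the class itself instead of leaving discovery to java.sql.DriverManager. A minimal sketch with placeholder connection options, since the real ones were removed above:

// Sketch: name the driver class explicitly so it is loaded and registered
// before DriverManager is asked for it. All connection values are placeholders.
val df = spark.read
  .format("jdbc")
  .option("driver", "org.postgresql.Driver")
  .option("url", "jdbc:postgresql://<host>:<port>/<db>")
  .option("dbtable", "<schema>.<table>")
  .option("user", "<username>")
  .option("password", "<password>")
  .load()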

Best Answer

I ran into the same problem. I was trying to connect to Vertica through Spark using JDBC. My setup: spark-shell, Spark version 2.2.0, Java version 1.8.

External jars used for the connection: vertica-8.1.1_spark2.1_scala2.11-20170623.jar and vertica-jdbc-8.1.1-0.jar
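
(The original answer doesn't show how these jars were put on the classpath; one common way, sketched here with assumed paths, is the --jars flag of spark-shell, which makes them visible to both the driver and the executors:)

spark-shell --jars /home/hadoop/vertica-jdbc-8.1.1-0.jar,/home/hadoop/vertica-8.1.1_spark2.1_scala2.11-20170623.jar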

Connection code:

import java.sql.DriverManager
import java.util.Properties
import com.vertica.jdbc.Driver

val jdbcUsername = "<username>"
val jdbcPassword = "<password>"
val jdbcHostname = "<vertica server>"
val jdbcPort = <vertica port>
val jdbcDatabase = "<vertica DB>"
val jdbcUrl = s"jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword}"

val connectionProperties = new Properties()
connectionProperties.put("user", jdbcUsername)
connectionProperties.put("password", jdbcPassword)

val connection = DriverManager.getConnection(jdbcUrl, connectionProperties)
java.sql.SQLException: No suitable driver found for jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword}

  at java.sql.DriverManager.getConnection(Unknown Source)
  at java.sql.DriverManager.getConnection(Unknown Source)
  ... 56 elided

If I run the same command a second time, I get the following output and the connection is established:

scala> val connection = DriverManager.getConnection(jdbcUrl, connectionProperties)
connection: java.sql.Connection = com.vertica.jdbc.VerticaJdbc4ConnectionImpl@7d994c
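
A common remedy for this first-call failure (not from the original answer) is to load the driver class explicitly before the first getConnection, since Class.forName runs the driver's static initializer, which registers it with DriverManager. A minimal sketch under that assumption:

import java.sql.DriverManager
import java.util.Properties

// Run the Vertica driver's static initializer so it registers itself
// with DriverManager before the first connection attempt.
Class.forName("com.vertica.jdbc.Driver")

val props = new Properties()
props.put("user", jdbcUsername)
props.put("password", jdbcPassword)

// The first call should now succeed instead of throwing "No suitable driver".
val connection = DriverManager.getConnection(jdbcUrl, props)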

Regarding "hadoop - On EMR Spark, JDBC load fails the first time, then works", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42515185/
