postgresql - How to include the JDBC jar in Spark using Maven

Tags: postgresql scala apache-spark jdbc sbt

I have a Spark (2.1.0) job that uses the Postgres JDBC driver, as described here: https://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases

I am writing out a DataFrame like this:
import java.util.Properties
import org.apache.spark.sql.SaveMode

val jdbcURL = s"jdbc:postgresql://${config.pgHost}:${config.pgPort}/${config.pgDatabase}?user=${config.pgUser}&password=${config.pgPassword}"
val connectionProperties = new Properties()
connectionProperties.put("user", config.pgUser)
connectionProperties.put("password", config.pgPassword)
// Overwrite the target table with the DataFrame's contents
dataFrame.write.mode(SaveMode.Overwrite).jdbc(jdbcURL, tableName, connectionProperties)

I can successfully include the JDBC driver by manually downloading it from https://jdbc.postgresql.org/download/postgresql-42.1.1.jar and passing --jars postgresql-42.1.1.jar --driver-class-path postgresql-42.1.1.jar to spark-submit.
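For context, the full working invocation looks roughly like this (the application class and jar names here are illustrative, not from the original job):

spark-submit \
  --jars postgresql-42.1.1.jar \
  --driver-class-path postgresql-42.1.1.jar \
  --class com.example.MyJob \
  my-job-assembly.jar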

However, I would rather not have to download the jar first.

I tried --jars https://jdbc.postgresql.org/download/postgresql-42.1.1.jar without success; it fails with:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: http
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:364)
    at org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:480)
    at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$11$$anonfun$apply$8.apply(Client.scala:600)
    at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$11$$anonfun$apply$8.apply(Client.scala:599)
    at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:74)
    at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$11.apply(Client.scala:599)
    at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$11.apply(Client.scala:598)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:598)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:868)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:170)
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1154)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1213)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

I have also tried:

Including "org.postgresql" % "postgresql" % "42.1.1" in my build.sbt file (see the sketch after this list)

The spark-submit options --repositories https://mvnrepository.com/artifact --packages org.postgresql:postgresql:42.1.1

The spark-submit options --repositories https://mvnrepository.com/artifact --conf "spark.jars.packages=org.postgresql:postgresql:42.1.1"
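For reference, the build.sbt attempt was a standard dependency declaration, roughly like this minimal sketch (the project name and Scala version are illustrative):

name := "my-spark-job"   // illustrative project name
scalaVersion := "2.11.8" // Spark 2.1.0 is built against Scala 2.11

// Spark itself is provided by the cluster at runtime
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.0" % "provided"
// The Postgres JDBC driver the job needs
libraryDependencies += "org.postgresql" % "postgresql" % "42.1.1"

Note that a plain sbt package does not bundle dependencies into the application jar; a plugin such as sbt-assembly would be needed for the driver to ship inside the jar, which may be why this attempt alone did not help.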

All of these fail in the same way:

17/08/01 13:14:49 ERROR yarn.ApplicationMaster: User class threw exception: java.sql.SQLException: No suitable driver
java.sql.SQLException: No suitable driver
    at java.sql.DriverManager.getDriver(DriverManager.java:315)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:53)
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:426)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
    at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:446)
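(A commonly suggested mitigation for the "No suitable driver" error is to name the driver class explicitly in the connection properties, so Spark loads it through its own class loader rather than relying on java.sql.DriverManager discovery; a minimal sketch:

connectionProperties.put("driver", "org.postgresql.Driver")

This only helps once the jar is actually on the classpath, so it does not by itself solve the distribution problem.)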

Best Answer

You can copy the JDBC jar file into the jars folder of your Spark installation, and then deploy your application with spark-submit without the --jars option.
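As a concrete sketch (the SPARK_HOME path and application names are illustrative):

cp postgresql-42.1.1.jar $SPARK_HOME/jars/
spark-submit --class com.example.MyJob my-job-assembly.jar

Everything under $SPARK_HOME/jars is on the classpath of both the driver and the executors, so the driver class is found without any extra flags.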

This question, "postgresql - How to include the JDBC jar in Spark using Maven", comes from a similar question on Stack Overflow: https://stackoverflow.com/questions/45447357/
