Trying to convert a PostgreSQL DB into DataFrames. Here is my code:
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Connect to DB") \
    .getOrCreate()

jdbcUrl = "jdbc:postgresql://XXXXXX"
connectionProperties = {
    "user": " ",
    "password": " ",
    "driver": "org.postgresql.Driver"
}

query = "(SELECT table_name FROM information_schema.tables) XXX"
df = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)
table_name_list = df.select("table_name").rdd.flatMap(lambda x: x).collect()

for table_name in table_name_list:
    df2 = spark.read.jdbc(url=jdbcUrl, table=table_name, properties=connectionProperties)
I'm getting the error:
java.sql.SQLException: Unsupported type ARRAY on generating df2 for table name
I don't get the same error if I hard-code the table name value:
df2 = spark.read.jdbc(jdbcUrl,"conditions",properties=connectionProperties)
I checked the table_name type and it is String. Is this the correct approach?
Best Answer
I'm guessing you don't want the table names that belong to Postgres's internal workings, such as pg_type, pg_policies, and so on, whose schema is pg_catalog. Those are what cause the error
py4j.protocol.Py4JJavaError: An error occurred while calling o34.jdbc. : java.sql.SQLException: Unsupported type ARRAY
when you try to read them as
spark.read.jdbc(url=jdbcUrl, table='pg_type', properties=connectionProperties)
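If you did need one of those catalog tables, one possible workaround (a minimal sketch, not part of the original answer) is to cast the array column to text inside a pushdown subquery, so the JDBC reader only ever sees strings. Here pg_policies and its roles column (a name[]) are used purely as an illustration:

# Sketch of a workaround: cast the array column to text in a subquery,
# so Spark's JDBC reader never encounters an ARRAY type.
query = "(SELECT schemaname, tablename, policyname, roles::text AS roles FROM pg_catalog.pg_policies) AS p"
df_policies = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)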
There are also tables such as applicable_roles, view_table_usage, and so on, whose schema is information_schema. These cause
py4j.protocol.Py4JJavaError: An error occurred while calling o34.jdbc. : org.postgresql.util.PSQLException: ERROR: relation "view_table_usage" does not exist
when you try to read them as
spark.read.jdbc(url=jdbcUrl, table='view_table_usage', properties=connectionProperties)
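That second error is about name resolution rather than types: information_schema is not on the default search_path, so the bare table name cannot be resolved. Schema-qualifying the name, as in this sketch (an assumption, not shown in the original answer), avoids it:

# Qualifying the table with its schema lets Postgres resolve the relation
df_usage = spark.read.jdbc(url=jdbcUrl, table='information_schema.view_table_usage',
                           properties=connectionProperties)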
Tables whose schema is public can be read with the jdbc command shown above.
> I checked table_name type and it is String, is this the correct approach?
So you need to filter out those table names and apply your logic as:
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Connect to DB") \
    .getOrCreate()

jdbcUrl = "jdbc:postgresql://hostname:post/"
connectionProperties = {
    "user": " ",
    "password": " ",
    "driver": "org.postgresql.Driver"
}

query = "information_schema.tables"
df = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)
table_name_list = df.filter((df["table_schema"] != 'pg_catalog') & (df["table_schema"] != 'information_schema')) \
    .select("table_name").rdd.flatMap(lambda x: x).collect()

for table_name in table_name_list:
    df2 = spark.read.jdbc(url=jdbcUrl, table=table_name, properties=connectionProperties)
This should work.
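Equivalently, assuming you only want user tables, you could push the schema filter into the JDBC query itself instead of filtering in Spark, so the catalog rows are never shipped over the wire. A sketch, not from the original answer:

# Filter out system schemas on the Postgres side via a pushdown subquery
query = """(SELECT table_name FROM information_schema.tables
            WHERE table_schema NOT IN ('pg_catalog', 'information_schema')) AS t"""
df = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)
table_name_list = [row.table_name for row in df.collect()]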
Regarding "python - Unsupported Array error when reading JDBC source in (Py)Spark?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/50613977/