I am trying to store a DataFrame with a nested schema in PostgreSQL. Can someone explain how to store the (coordinates) and (user_mentions) columns in Postgres? I have read that Postgres can store array types, but I get an error when trying to write to the DB. I am also not entirely sure my table is created correctly.
Error:
Exception in thread "main" java.lang.IllegalArgumentException: Can't get JDBC type for array<array<double>>
DataFrame schema:
root
|-- created_at: string (nullable = true)
|-- id: long (nullable = true)
|-- text: string (nullable = true)
|-- source: string (nullable = true)
|-- user_id: long (nullable = true)
|-- in_reply_to_status_id: string (nullable = true)
|-- in_reply_to_user_id: long (nullable = true)
|-- lang: string (nullable = true)
|-- retweet_count: long (nullable = true)
|-- reply_count: long (nullable = true)
|-- coordinates: array (nullable = true)
| |-- element: array (containsNull = true)
| | |-- element: double (containsNull = true)
|-- hashtags: array (nullable = true)
| |-- element: string (containsNull = true)
|-- user_mentions: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- id: long (nullable = true)
| | |-- id_str: string (nullable = true)
| | |-- indices: array (nullable = true)
| | | |-- element: long (containsNull = true)
| | |-- name: string (nullable = true)
| | |-- screen_name: string (nullable = true)
Postgres table creation (note: an unquoted hyphenated name like test-table is not a valid Postgres identifier, and the Spark long columns need bigint rather than int):
create table test_table (
  created_at varchar,
  id bigint,
  text text,
  source text,
  user_id bigint,
  in_reply_to_status_id varchar,
  in_reply_to_user_id bigint,
  lang varchar,
  retweet_count bigint,
  reply_count bigint,
  coordinates double precision[][],
  hashtags text[],
  user_mentions text[]
);
Spark Scala code:
val df_1 = df.select(
    col("created_at"), col("id"), col("text"), col("source"),
    col("user.id").as("user_id"),
    col("in_reply_to_status_id"), col("in_reply_to_user_id"),
    col("lang"), col("retweet_count"), col("reply_count"),
    col("place.bounding_box.coordinates"),
    col("entities.hashtags"), col("entities.user_mentions"))
  .withColumn("coordinates", explode(col("coordinates")))
df_1.show(truncate = false)
df_1.printSchema()
df_1.write
  .format("jdbc")
  .option("url", "postgres_url")
  .option("dbtable", "xxx.mytable")
  .option("user", "user")
  .option("password", "pass")
  .save()
Sample input:
Coordinates column:
[[80.063341, 26.348309], [80.063341, 30.43339], [88.2027, 30.43339], [88.2027, 26.348309]]
User mentions column:
[[123456789, 123456789, [0, 15], Name, ScreenName]]
Best answer
Spark's JDBC source only supports reading and writing one-dimensional arrays. You can either explode the data into multiple rows (so that each row carries a plain double[]), or convert the double[][] column into a comma-separated string[] or a single plain string.
For example, [[1, 2], [3, 4]] can be converted to ["1,2", "3,4"].
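A minimal sketch of the second approach, assuming the df_1 frame from the question and a reachable Postgres instance (the JDBC URL and table name are placeholders). The user_mentions column, which the answer does not cover, is an array of structs; one option is to serialise each struct to JSON so it also becomes a one-dimensional array<string> compatible with a text[] column:

```scala
import org.apache.spark.sql.functions._

// Turn array<array<double>> into array<string>, e.g.
// [[80.063341, 26.348309], [88.2027, 30.43339]] -> ["80.063341,26.348309", "88.2027,30.43339"].
// transform and array_join are Spark SQL functions available since Spark 2.4.
val writable = df_1
  .withColumn("coordinates",
    expr("transform(coordinates, xy -> array_join(xy, ','))"))
  // Serialise each user_mentions struct to a JSON string, giving array<string>.
  .withColumn("user_mentions",
    expr("transform(user_mentions, m -> to_json(m))"))

writable.write
  .format("jdbc")
  .option("url", "jdbc:postgresql://host:5432/db") // placeholder URL
  .option("dbtable", "test_table")
  .option("user", "user")
  .option("password", "pass")
  .mode("append")
  .save()
```

With this layout the Postgres columns would both be declared as text[]; if the numeric structure of coordinates must be preserved server-side, the explode alternative (one double[] per row) keeps the values typed at the cost of denormalising the table.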
This is based on a similar Stack Overflow question, "arrays - How to insert a dataframe with an array<array<double>> column into Postgresql?": https://stackoverflow.com/questions/61695787/