dataframe - Why are all the column types strings when loading a CSV into a PySpark dataframe?

Tags: dataframe pyspark

I have a CSV file containing only numbers (no strings in it).
It has int and float values. But when I read it in PySpark like this:

df = spark.read.csv("s3://s3-cdp-prod-hive/novaya/instacart/data.csv",header=False)

all of the dataframe's column types are string.

How do I get the numeric columns read in as int and float automatically?

Some of the columns contain nan. In the file it is represented as nan:
0.18277,-0.188931,0.0893389,0.119931,0.318853,-0.132933,-0.0288816,0.136137,0.12939,-0.245342,0.0608182,0.0802028,-0.00625962,0.271222,0.187855,0.132606,-0.0451533,0.140501,0.0704631,0.0229986,-0.0533376,-0.319643,-0.029321,-0.160937,0.608359,0.0513554,-0.246744,0.0817331,-0.410682,0.210652,0.375154,0.021617,0.119288,0.0674939,0.190642,0.161885,0.0385196,-0.341168,0.138659,-0.236908,0.230963,0.23714,-0.277465,0.242136,0.0165013,0.0462388,0.259744,-0.397228,-0.0143719,0.0891644,0.222225,0.0987765,0.24049,0.357596,-0.106266,-0.216665,0.191123,-0.0164234,0.370766,0.279462,0.46796,-0.0835098,0.112693,0.231951,-0.0942302,-0.178815,0.259096,-0.129323,1165491,175882,16.5708805975,6,0,2.80890261184,4.42114773551,0,23,0,13.4645462866,18.0359037455,11,30.0,0.0,11.4435397208,84.7504967125,30.0,5370,136.0,1.0,9.61508192633,62.2006926209,1,0,0,22340,9676,322.71241867,17.7282900627,1,100,4.24701125287,2.72260519248,0,6,17.9743048247,13.3241271262,0,23,82.4988407009,11.4021333588,0.0,30.0,45.1319021862,7.76284691137,1.0,66.0,9.40127026245,2.30880529144,1,73,0.113021725659,0.264843289305,0.0,0.986301369863,1,30450,0

Best answer

As you can see in the documentation:

inferSchema – infers the input schema automatically from data. It requires one extra pass over the data. If None is set, it uses the default value, false.



For NaN values, see the same documentation:

nanValue – sets the string representation of a non-number value. If None is set, it uses the default value, NaN

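Putting the two options together: since this file writes its non-numbers as lowercase nan while Spark's default nanValue is NaN, you will likely need to set nanValue explicitly as well. A minimal sketch, reusing the S3 path from the question:

# Read with type inference and a custom NaN marker.
# nanValue names the string that stands for a non-number in the file;
# the default is "NaN", but this file uses lowercase "nan".
df = spark.read.csv(
    "s3://s3-cdp-prod-hive/novaya/instacart/data.csv",
    header=False,
    inferSchema=True,
    nanValue="nan",
)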


By setting inferSchema to True you will get a dataframe with the types inferred.

Here is an example:

CSV file:
12,5,8,9
1.0,3,46,NaN

By default inferSchema is False, so all values are read as strings:
from pyspark.sql.types import *

>>> df = spark.read.csv("prova.csv",header=False) 
>>> df.dtypes
[('_c0', 'string'), ('_c1', 'string'), ('_c2', 'string'), ('_c3', 'string')]

>>> df.show()
+---+---+---+---+
|_c0|_c1|_c2|_c3|
+---+---+---+---+
| 12|  5|  8|  9|
|1.0|  3| 46|NaN|
+---+---+---+---+

If you set inferSchema to True:
>>> df = spark.read.csv("prova.csv",inferSchema =True,header=False) 
>>> df.dtypes
[('_c0', 'double'), ('_c1', 'int'), ('_c2', 'int'), ('_c3', 'double')]


>>> df.show()
+----+---+---+---+
| _c0|_c1|_c2|_c3|
+----+---+---+---+
|12.0|  5|  8|9.0|
| 1.0|  3| 46|NaN|
+----+---+---+---+
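Note that, as the documentation quoted above says, inferring the schema costs one extra pass over the data. If you already know the column types, you can avoid that pass by supplying an explicit schema instead; that is what the pyspark.sql.types import shown earlier is for. A minimal sketch for the four-column example file, with placeholder column names:

from pyspark.sql.types import StructType, StructField, IntegerType, DoubleType

# Declare the types up front instead of inferring them.
# The column names c0..c3 are illustrative; choose your own.
schema = StructType([
    StructField("c0", DoubleType(), True),   # 12 / 1.0 -> double
    StructField("c1", IntegerType(), True),  # 5  / 3   -> int
    StructField("c2", IntegerType(), True),  # 8  / 46  -> int
    StructField("c3", DoubleType(), True),   # 9  / NaN -> double
])

df = spark.read.csv("prova.csv", schema=schema, header=False)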

Original question on Stack Overflow: https://stackoverflow.com/questions/44625888/
