How can I use pd.read_csv() to iteratively chunk through a file and retain the dtype and other meta-information as if I read in the entire dataset at once?
I need to read in a dataset that is too large to fit into memory. I want to import the file with pd.read_csv and immediately append each chunk to an HDFStore. However, dtype inference knows nothing about subsequent chunks.
If the first chunk stored in the table contains only ints and a later chunk contains floats, an exception is raised. So I first need to iterate over the dataframe with read_csv and retain the widest inferred type. In addition, for object dtypes I need to retain the maximum string length, since they will be stored as strings in the table.
Is there a pandonic way to retain only this information, without reading in the entire dataset?
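The failure mode described above can be reproduced with a minimal sketch (hypothetical in-memory data): per-chunk dtype inference disagrees as soon as a later chunk contains a value the first chunk's dtype cannot hold.

```python
import io
import pandas as pd

# Column "b" is all integers in the first chunk, so pandas infers
# int64 there; the second chunk contains 3.5, so it infers float64.
csv = io.StringIO("a,b\n1,1\n2,2\n3,3.5\n4,4\n")
chunks = list(pd.read_csv(csv, chunksize=2))
print(chunks[0]["b"].dtype)  # int64
print(chunks[1]["b"].dtype)  # float64
```

Appending the second chunk to an HDFStore table created from the first would then fail, because the stored column is typed int64.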
Best answer
I didn't expect this to be so intuitive, or I wouldn't have posted the question. But once again, pandas makes it a breeze. Still, I'm keeping the question up, since this information may be useful to others working with big data:
In [1]: chunker = pd.read_csv('DATASET.csv', chunksize=500, header=0)
# Store the dtypes of each chunk into a list and convert it to a dataframe:
In [2]: dtypes = pd.DataFrame([chunk.dtypes for chunk in chunker])
In [3]: dtypes.values[:5]
Out[3]:
array([[int64, int64, int64, object, int64, int64, int64, int64],
[int64, int64, int64, int64, int64, int64, int64, int64],
[int64, int64, int64, int64, int64, int64, int64, int64],
[int64, int64, int64, int64, int64, int64, int64, int64],
[int64, int64, int64, int64, int64, int64, int64, int64]], dtype=object)
# Very cool that I can take the max of these data types and it will preserve the hierarchy:
In [4]: dtypes.max().values
Out[4]: array([int64, int64, int64, object, int64, int64, int64, int64], dtype=object)
# I can now store the above into a dictionary:
types = dtypes.max().to_dict()
# And pass it into pd.read_csv for the second run over the same file:
chunker = pd.read_csv('DATASET.csv', dtype=types, chunksize=500)
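The code above covers the dtype half of the question; the maximum string length of object columns (needed for HDFStore's min_itemsize when appending) can be collected in the same first pass. A sketch, using a small in-memory CSV as a stand-in for the real file:

```python
import io
import pandas as pd

# Hypothetical small dataset standing in for the large CSV.
csv = io.StringIO(
    "id,name,score\n"
    "1,alice,10\n"
    "2,bob,9.5\n"
    "3,charlotte,7\n"
)

dtypes = []
str_lens = {}  # longest string per object column, for min_itemsize
for chunk in pd.read_csv(csv, chunksize=2):
    dtypes.append(chunk.dtypes)
    for col in chunk.select_dtypes(include='object'):
        longest = chunk[col].astype(str).str.len().max()
        str_lens[col] = max(str_lens.get(col, 0), int(longest))

# Same max-of-dtypes trick as above; 'score' widens to float64
# because the second chunk contains only the int 7.
types = pd.DataFrame(dtypes).max().to_dict()
print(str_lens)  # {'name': 9}
```

On the second pass, `str_lens` can be passed as `min_itemsize` to `HDFStore.append` so the table reserves room for the longest string in each object column.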
Original question: python - Get inferred dataframe types iteratively using chunksize, on Stack Overflow: https://stackoverflow.com/questions/15555005/