I have the following code and it runs fine. It reads the data as a Spark DataFrame:
April_data = sc.read.parquet('somepath/data.parquet')
type(April_data)
pyspark.sql.dataframe.DataFrame
But when I try to read it as a pandas df, I get an error:
df_pp = pd.read_parquet('somepath/data.parquet')
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_4244/1910461502.py in <module>
----> 1 df_pp = pd.read_parquet('somepath/data.parquet')
/usr/local/anaconda//parquet.py in read_parquet(path, engine, columns, storage_options, use_nullable_dtypes, **kwargs)
498 storage_options=storage_options,
499 use_nullable_dtypes=use_nullable_dtypes,
--> 500 **kwargs,
501 )
/usr/local/anaconda//io/parquet.py in read(self, path, columns, use_nullable_dtypes, storage_options, **kwargs)
234 kwargs.pop("filesystem", None),
235 storage_options=storage_options,
--> 236 mode="rb",
237 )
238 try:
/usr/local/anaconda/parquet.py in _get_path_or_handle(path, fs, storage_options, mode, is_dir)
100 # this branch is used for example when reading from non-fsspec URLs
101 handles = get_handle(
--> 102 path_or_handle, mode, is_text=False, storage_options=storage_options
103 )
104 fs = None
/usr/local/anaconda/common.py in get_handle(path_or_buf, mode, encoding, compression, memory_map, is_text, errors, storage_options)
709 else:
710 # Binary mode
--> 711 handle = open(handle, ioargs.mode)
712 handles.append(handle)
713
FileNotFoundError: [Errno 2] No such file or directory: 'somepath/data.parquet'
I have installed the fastparquet package, as shown below:
!pip install fastparquet
Successfully installed cramjam-2.5.0 fastparquet-0.8.1
# Update 1
The file is in HDFS, and I can see it when I list the location:
hdfs_location = 'somepath/'
!hdfs dfs -ls $hdfs_location
I am running all of this code in the same file.
Best Answer
According to the documentation, pandas.read_parquet, like its sibling IO modules, does not support reading from an HDFS location. While there is read_hdf, it reads HDF5 files, not parquet or other known formats.
For string values, read_parquet currently supports local file paths, online schemes (http, ftp), and two specific storage schemes (Amazon S3 buckets and Google Cloud Storage, i.e. gs).
However, you can pass a file-like object. So consider opening the required parquet file and passing its contents. Below are examples using various HDFS packages:
import io
from hdfs import InsecureClient

# WebHDFS URL is an assumption; adjust host/port to your namenode
client = InsecureClient('http://namenode:9870')
with client.read('somepath/data.parquet') as reader:
    # wrap the raw bytes in a seekable buffer for pd.read_parquet
    df_pp = pd.read_parquet(io.BytesIO(reader.read()))
from hdfs3 import HDFileSystem
hdfs = HDFileSystem(host='localhost', port=8020)
with hdfs.open('somepath/data.parquet') as f:
    df_pp = pd.read_parquet(f)
In addition, fastparquet supports converting to a pandas DataFrame:
from fastparquet import ParquetFile
pf = ParquetFile('somepath/data.parquet')
df = pf.to_pandas()
Regarding python - reading parquet into pandas FileNotFoundError, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/72308356/